00:00:00.001 Started by upstream project "autotest-per-patch" build number 132601 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.766 The recommended git tool is: git 00:00:01.766 using credential 00000000-0000-0000-0000-000000000002 00:00:01.768 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.782 Fetching changes from the remote Git repository 00:00:01.788 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.802 Using shallow fetch with depth 1 00:00:01.802 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.802 > git --version # timeout=10 00:00:01.814 > git --version # 'git version 2.39.2' 00:00:01.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.827 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.827 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.874 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.889 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.902 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.902 > git config core.sparsecheckout # timeout=10 00:00:06.916 > git read-tree -mu HEAD # timeout=10 00:00:06.934 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.961 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.961 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.073 [Pipeline] Start of Pipeline 00:00:07.089 [Pipeline] library 00:00:07.091 Loading library shm_lib@master 00:00:07.091 Library shm_lib@master is cached. Copying from home. 00:00:07.109 [Pipeline] node 00:00:07.119 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.121 [Pipeline] { 00:00:07.131 [Pipeline] catchError 00:00:07.132 [Pipeline] { 00:00:07.145 [Pipeline] wrap 00:00:07.154 [Pipeline] { 00:00:07.162 [Pipeline] stage 00:00:07.164 [Pipeline] { (Prologue) 00:00:07.358 [Pipeline] sh 00:00:07.647 + logger -p user.info -t JENKINS-CI 00:00:07.666 [Pipeline] echo 00:00:07.668 Node: CYP9 00:00:07.675 [Pipeline] sh 00:00:07.977 [Pipeline] setCustomBuildProperty 00:00:07.996 [Pipeline] echo 00:00:07.997 Cleanup processes 00:00:08.000 [Pipeline] sh 00:00:08.282 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.282 594326 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.297 [Pipeline] sh 00:00:08.586 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.586 ++ grep -v 'sudo pgrep' 00:00:08.586 ++ awk '{print $1}' 00:00:08.586 + sudo kill -9 00:00:08.586 + true 00:00:08.600 [Pipeline] cleanWs 00:00:08.609 [WS-CLEANUP] Deleting project workspace... 00:00:08.609 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.616 [WS-CLEANUP] done 00:00:08.620 [Pipeline] setCustomBuildProperty 00:00:08.632 [Pipeline] sh 00:00:08.917 + sudo git config --global --replace-all safe.directory '*' 00:00:09.017 [Pipeline] httpRequest 00:00:09.903 [Pipeline] echo 00:00:09.904 Sorcerer 10.211.164.20 is alive 00:00:09.909 [Pipeline] retry 00:00:09.910 [Pipeline] { 00:00:09.920 [Pipeline] httpRequest 00:00:09.924 HttpMethod: GET 00:00:09.924 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.924 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.931 Response Code: HTTP/1.1 200 OK 00:00:09.931 Success: Status code 200 is in the accepted range: 200,404 00:00:09.932 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.146 [Pipeline] } 00:00:28.166 [Pipeline] // retry 00:00:28.174 [Pipeline] sh 00:00:28.463 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.482 [Pipeline] httpRequest 00:00:28.875 [Pipeline] echo 00:00:28.877 Sorcerer 10.211.164.20 is alive 00:00:28.888 [Pipeline] retry 00:00:28.890 [Pipeline] { 00:00:28.906 [Pipeline] httpRequest 00:00:28.911 HttpMethod: GET 00:00:28.911 URL: http://10.211.164.20/packages/spdk_da516d86230ce35da1d9a947705cf5b25a324128.tar.gz 00:00:28.912 Sending request to url: http://10.211.164.20/packages/spdk_da516d86230ce35da1d9a947705cf5b25a324128.tar.gz 00:00:28.918 Response Code: HTTP/1.1 200 OK 00:00:28.919 Success: Status code 200 is in the accepted range: 200,404 00:00:28.919 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_da516d86230ce35da1d9a947705cf5b25a324128.tar.gz 00:04:17.773 [Pipeline] } 00:04:17.789 [Pipeline] // retry 00:04:17.797 [Pipeline] sh 00:04:18.133 + tar --no-same-owner -xf spdk_da516d86230ce35da1d9a947705cf5b25a324128.tar.gz 00:04:21.445 [Pipeline] sh 00:04:21.735 + git -C spdk log 
--oneline -n5 00:04:21.735 da516d862 bdev/nvme: Add lock to unprotected operations around attach controller 00:04:21.735 d0742f973 bdev/nvme: Add lock to unprotected operations around detach controller 00:04:21.735 0b658ecad bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:04:21.735 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:04:21.735 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:04:21.748 [Pipeline] } 00:04:21.759 [Pipeline] // stage 00:04:21.766 [Pipeline] stage 00:04:21.769 [Pipeline] { (Prepare) 00:04:21.786 [Pipeline] writeFile 00:04:21.801 [Pipeline] sh 00:04:22.091 + logger -p user.info -t JENKINS-CI 00:04:22.104 [Pipeline] sh 00:04:22.392 + logger -p user.info -t JENKINS-CI 00:04:22.405 [Pipeline] sh 00:04:22.693 + cat autorun-spdk.conf 00:04:22.693 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:22.693 SPDK_TEST_NVMF=1 00:04:22.693 SPDK_TEST_NVME_CLI=1 00:04:22.693 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:22.693 SPDK_TEST_NVMF_NICS=e810 00:04:22.693 SPDK_TEST_VFIOUSER=1 00:04:22.693 SPDK_RUN_UBSAN=1 00:04:22.693 NET_TYPE=phy 00:04:22.700 RUN_NIGHTLY=0 00:04:22.704 [Pipeline] readFile 00:04:22.730 [Pipeline] withEnv 00:04:22.732 [Pipeline] { 00:04:22.745 [Pipeline] sh 00:04:23.032 + set -ex 00:04:23.032 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:23.032 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:23.032 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:23.032 ++ SPDK_TEST_NVMF=1 00:04:23.032 ++ SPDK_TEST_NVME_CLI=1 00:04:23.032 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:23.032 ++ SPDK_TEST_NVMF_NICS=e810 00:04:23.032 ++ SPDK_TEST_VFIOUSER=1 00:04:23.032 ++ SPDK_RUN_UBSAN=1 00:04:23.032 ++ NET_TYPE=phy 00:04:23.032 ++ RUN_NIGHTLY=0 00:04:23.032 + case $SPDK_TEST_NVMF_NICS in 00:04:23.032 + DRIVERS=ice 00:04:23.032 + [[ tcp == \r\d\m\a ]] 00:04:23.032 + [[ -n ice ]] 00:04:23.032 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw 
iw_cxgb4 00:04:23.032 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:23.032 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:04:23.032 rmmod: ERROR: Module irdma is not currently loaded 00:04:23.032 rmmod: ERROR: Module i40iw is not currently loaded 00:04:23.032 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:23.032 + true 00:04:23.032 + for D in $DRIVERS 00:04:23.032 + sudo modprobe ice 00:04:23.032 + exit 0 00:04:23.043 [Pipeline] } 00:04:23.058 [Pipeline] // withEnv 00:04:23.064 [Pipeline] } 00:04:23.077 [Pipeline] // stage 00:04:23.088 [Pipeline] catchError 00:04:23.089 [Pipeline] { 00:04:23.103 [Pipeline] timeout 00:04:23.104 Timeout set to expire in 1 hr 0 min 00:04:23.105 [Pipeline] { 00:04:23.119 [Pipeline] stage 00:04:23.122 [Pipeline] { (Tests) 00:04:23.138 [Pipeline] sh 00:04:23.428 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:23.428 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:23.428 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:23.428 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:23.428 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.428 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:23.428 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:23.428 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:23.428 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:23.428 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:23.428 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:23.428 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:23.428 + source /etc/os-release 00:04:23.428 ++ NAME='Fedora Linux' 00:04:23.428 ++ VERSION='39 (Cloud Edition)' 00:04:23.428 ++ ID=fedora 00:04:23.428 ++ VERSION_ID=39 00:04:23.428 ++ VERSION_CODENAME= 00:04:23.428 ++ PLATFORM_ID=platform:f39 00:04:23.428 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:23.428 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:23.428 ++ LOGO=fedora-logo-icon 00:04:23.428 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:23.428 ++ HOME_URL=https://fedoraproject.org/ 00:04:23.429 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:23.429 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:23.429 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:23.429 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:23.429 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:23.429 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:23.429 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:23.429 ++ SUPPORT_END=2024-11-12 00:04:23.429 ++ VARIANT='Cloud Edition' 00:04:23.429 ++ VARIANT_ID=cloud 00:04:23.429 + uname -a 00:04:23.429 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:23.429 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:26.734 Hugepages 00:04:26.734 node hugesize free / total 00:04:26.734 node0 1048576kB 0 / 0 00:04:26.734 node0 2048kB 0 / 0 00:04:26.734 node1 1048576kB 0 / 0 00:04:26.734 node1 2048kB 0 / 0 00:04:26.734 00:04:26.734 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.734 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:04:26.734 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:26.734 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:26.734 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:26.734 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:26.734 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:26.734 + rm -f /tmp/spdk-ld-path 00:04:26.734 + source autorun-spdk.conf 00:04:26.734 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.734 ++ SPDK_TEST_NVMF=1 00:04:26.734 ++ SPDK_TEST_NVME_CLI=1 00:04:26.734 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.734 ++ SPDK_TEST_NVMF_NICS=e810 00:04:26.734 ++ SPDK_TEST_VFIOUSER=1 00:04:26.735 ++ SPDK_RUN_UBSAN=1 00:04:26.735 ++ NET_TYPE=phy 00:04:26.735 ++ RUN_NIGHTLY=0 00:04:26.735 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:26.735 + [[ -n '' ]] 00:04:26.735 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.735 + for M in /var/spdk/build-*-manifest.txt 00:04:26.735 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:26.735 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.735 + for M in /var/spdk/build-*-manifest.txt 00:04:26.735 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:26.735 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.735 + for M in /var/spdk/build-*-manifest.txt 00:04:26.735 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:26.735 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.735 ++ uname 00:04:26.735 + [[ Linux == \L\i\n\u\x ]] 00:04:26.735 + sudo dmesg -T 00:04:26.735 + sudo dmesg --clear 00:04:26.735 + dmesg_pid=596472 00:04:26.735 + [[ Fedora Linux == FreeBSD ]] 00:04:26.735 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.735 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.735 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:26.735 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:26.735 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:26.735 + [[ -x /usr/src/fio-static/fio ]] 00:04:26.735 + export FIO_BIN=/usr/src/fio-static/fio 00:04:26.735 + FIO_BIN=/usr/src/fio-static/fio 00:04:26.735 + sudo dmesg -Tw 00:04:26.735 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:26.735 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:26.735 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:26.735 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:26.735 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:26.735 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:26.735 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:26.735 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:26.735 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.997 12:48:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:26.997 12:48:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:04:26.997 12:48:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:26.997 12:48:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:26.997 12:48:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.997 12:48:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:26.997 12:48:29 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.997 12:48:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:26.997 12:48:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:26.997 12:48:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.997 12:48:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.997 12:48:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.997 12:48:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.998 12:48:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.998 12:48:29 -- paths/export.sh@5 -- $ export PATH 00:04:26.998 12:48:29 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.998 12:48:29 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:26.998 12:48:29 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:26.998 12:48:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732880909.XXXXXX 00:04:26.998 12:48:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732880909.1tXhBA 00:04:26.998 12:48:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:26.998 12:48:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:26.998 12:48:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:26.998 12:48:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:26.998 12:48:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:26.998 12:48:29 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:26.998 12:48:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:26.998 12:48:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.998 12:48:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:26.998 12:48:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:26.998 12:48:29 -- pm/common@17 -- $ local monitor 00:04:26.998 12:48:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.998 12:48:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.998 12:48:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.998 12:48:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.998 12:48:29 -- pm/common@21 -- $ date +%s 00:04:26.998 12:48:29 -- pm/common@21 -- $ date +%s 00:04:26.998 12:48:29 -- pm/common@25 -- $ sleep 1 00:04:26.998 12:48:29 -- pm/common@21 -- $ date +%s 00:04:26.998 12:48:29 -- pm/common@21 -- $ date +%s 00:04:26.998 12:48:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880909 00:04:26.998 12:48:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880909 00:04:26.998 12:48:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880909 00:04:26.998 12:48:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880909 00:04:26.998 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880909_collect-vmstat.pm.log 00:04:26.998 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880909_collect-cpu-load.pm.log 00:04:26.998 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880909_collect-cpu-temp.pm.log 00:04:26.998 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880909_collect-bmc-pm.bmc.pm.log 00:04:27.943 12:48:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:27.943 12:48:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:27.943 12:48:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:27.943 12:48:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.943 12:48:30 -- spdk/autobuild.sh@16 -- $ date -u 00:04:27.943 Fri Nov 29 11:48:30 AM UTC 2024 00:04:27.943 12:48:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:27.943 v25.01-pre-279-gda516d862 00:04:27.943 12:48:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:27.943 12:48:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:27.943 12:48:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:27.943 12:48:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:27.943 12:48:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:27.943 12:48:30 -- common/autotest_common.sh@10 -- $ set +x 00:04:28.206 ************************************ 00:04:28.206 START TEST ubsan 00:04:28.206 ************************************ 00:04:28.206 12:48:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:28.206 using ubsan 00:04:28.206 00:04:28.206 real 0m0.001s 00:04:28.206 user 0m0.000s 00:04:28.206 sys 0m0.001s 00:04:28.206 12:48:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:28.206 12:48:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:28.206 ************************************ 00:04:28.206 END TEST ubsan 00:04:28.206 
************************************ 00:04:28.206 12:48:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:28.206 12:48:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:28.206 12:48:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:28.206 12:48:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:28.206 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:28.206 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:28.780 Using 'verbs' RDMA provider 00:04:44.645 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:56.892 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:57.463 Creating mk/config.mk...done. 00:04:57.463 Creating mk/cc.flags.mk...done. 00:04:57.463 Type 'make' to build. 
00:04:57.463 12:48:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:04:57.463 12:48:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:57.463 12:48:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:57.463 12:48:59 -- common/autotest_common.sh@10 -- $ set +x 00:04:57.463 ************************************ 00:04:57.463 START TEST make 00:04:57.463 ************************************ 00:04:57.463 12:48:59 make -- common/autotest_common.sh@1129 -- $ make -j144 00:04:58.035 make[1]: Nothing to be done for 'all'. 00:04:59.425 The Meson build system 00:04:59.425 Version: 1.5.0 00:04:59.425 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:59.425 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:59.425 Build type: native build 00:04:59.425 Project name: libvfio-user 00:04:59.425 Project version: 0.0.1 00:04:59.425 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:59.425 C linker for the host machine: cc ld.bfd 2.40-14 00:04:59.425 Host machine cpu family: x86_64 00:04:59.425 Host machine cpu: x86_64 00:04:59.425 Run-time dependency threads found: YES 00:04:59.425 Library dl found: YES 00:04:59.425 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:59.425 Run-time dependency json-c found: YES 0.17 00:04:59.425 Run-time dependency cmocka found: YES 1.1.7 00:04:59.425 Program pytest-3 found: NO 00:04:59.425 Program flake8 found: NO 00:04:59.425 Program misspell-fixer found: NO 00:04:59.425 Program restructuredtext-lint found: NO 00:04:59.425 Program valgrind found: YES (/usr/bin/valgrind) 00:04:59.425 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:59.426 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:59.426 Compiler for C supports arguments -Wwrite-strings: YES 00:04:59.426 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:59.426 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:59.426 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:59.426 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:59.426 Build targets in project: 8 00:04:59.426 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:59.426 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:59.426 00:04:59.426 libvfio-user 0.0.1 00:04:59.426 00:04:59.426 User defined options 00:04:59.426 buildtype : debug 00:04:59.426 default_library: shared 00:04:59.426 libdir : /usr/local/lib 00:04:59.426 00:04:59.426 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:59.685 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:59.947 [1/37] Compiling C object samples/null.p/null.c.o 00:04:59.947 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:59.947 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:59.947 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:59.947 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:59.947 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:59.947 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:59.947 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:59.947 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:59.947 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:59.947 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 
00:04:59.947 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:59.947 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:59.947 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:59.947 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:59.947 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:59.947 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:59.947 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:59.947 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:59.947 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:59.947 [21/37] Compiling C object samples/server.p/server.c.o 00:04:59.947 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:59.947 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:59.947 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:59.947 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:59.947 [26/37] Compiling C object samples/client.p/client.c.o 00:04:59.947 [27/37] Linking target samples/client 00:04:59.947 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:59.947 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:00.208 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:05:00.208 [31/37] Linking target test/unit_tests 00:05:00.208 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:00.208 [33/37] Linking target samples/lspci 00:05:00.208 [34/37] Linking target samples/server 00:05:00.208 [35/37] Linking target samples/null 00:05:00.208 [36/37] Linking target samples/gpio-pci-idio-16 00:05:00.208 [37/37] Linking target samples/shadow_ioeventfd_server 00:05:00.208 INFO: autodetecting backend as ninja 00:05:00.208 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:00.469 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:00.731 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:00.731 ninja: no work to do. 00:05:07.321 The Meson build system 00:05:07.321 Version: 1.5.0 00:05:07.321 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:07.321 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:07.321 Build type: native build 00:05:07.321 Program cat found: YES (/usr/bin/cat) 00:05:07.321 Project name: DPDK 00:05:07.321 Project version: 24.03.0 00:05:07.321 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:07.321 C linker for the host machine: cc ld.bfd 2.40-14 00:05:07.321 Host machine cpu family: x86_64 00:05:07.321 Host machine cpu: x86_64 00:05:07.321 Message: ## Building in Developer Mode ## 00:05:07.321 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:07.321 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:07.321 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:07.321 Program python3 found: YES (/usr/bin/python3) 00:05:07.321 Program cat found: YES (/usr/bin/cat) 00:05:07.321 Compiler for C supports arguments -march=native: YES 00:05:07.321 Checking for size of "void *" : 8 00:05:07.321 Checking for size of "void *" : 8 (cached) 00:05:07.321 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:07.321 Library m found: YES 00:05:07.321 Library numa found: YES 00:05:07.321 Has header "numaif.h" : YES 00:05:07.321 Library fdt found: NO 
00:05:07.321 Library execinfo found: NO 00:05:07.321 Has header "execinfo.h" : YES 00:05:07.321 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:07.321 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:07.321 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:07.321 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:07.321 Run-time dependency openssl found: YES 3.1.1 00:05:07.321 Run-time dependency libpcap found: YES 1.10.4 00:05:07.321 Has header "pcap.h" with dependency libpcap: YES 00:05:07.321 Compiler for C supports arguments -Wcast-qual: YES 00:05:07.321 Compiler for C supports arguments -Wdeprecated: YES 00:05:07.321 Compiler for C supports arguments -Wformat: YES 00:05:07.321 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:07.321 Compiler for C supports arguments -Wformat-security: NO 00:05:07.321 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:07.321 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:07.321 Compiler for C supports arguments -Wnested-externs: YES 00:05:07.321 Compiler for C supports arguments -Wold-style-definition: YES 00:05:07.321 Compiler for C supports arguments -Wpointer-arith: YES 00:05:07.321 Compiler for C supports arguments -Wsign-compare: YES 00:05:07.321 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:07.321 Compiler for C supports arguments -Wundef: YES 00:05:07.321 Compiler for C supports arguments -Wwrite-strings: YES 00:05:07.321 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:07.321 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:07.321 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:07.321 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:07.321 Program objdump found: YES (/usr/bin/objdump) 00:05:07.321 Compiler for C supports arguments -mavx512f: YES 00:05:07.321 Checking if "AVX512 checking" compiles: YES 00:05:07.321 
Fetching value of define "__SSE4_2__" : 1 00:05:07.321 Fetching value of define "__AES__" : 1 00:05:07.321 Fetching value of define "__AVX__" : 1 00:05:07.321 Fetching value of define "__AVX2__" : 1 00:05:07.321 Fetching value of define "__AVX512BW__" : 1 00:05:07.321 Fetching value of define "__AVX512CD__" : 1 00:05:07.321 Fetching value of define "__AVX512DQ__" : 1 00:05:07.321 Fetching value of define "__AVX512F__" : 1 00:05:07.321 Fetching value of define "__AVX512VL__" : 1 00:05:07.321 Fetching value of define "__PCLMUL__" : 1 00:05:07.321 Fetching value of define "__RDRND__" : 1 00:05:07.321 Fetching value of define "__RDSEED__" : 1 00:05:07.321 Fetching value of define "__VPCLMULQDQ__" : 1 00:05:07.321 Fetching value of define "__znver1__" : (undefined) 00:05:07.321 Fetching value of define "__znver2__" : (undefined) 00:05:07.321 Fetching value of define "__znver3__" : (undefined) 00:05:07.321 Fetching value of define "__znver4__" : (undefined) 00:05:07.321 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:07.321 Message: lib/log: Defining dependency "log" 00:05:07.321 Message: lib/kvargs: Defining dependency "kvargs" 00:05:07.321 Message: lib/telemetry: Defining dependency "telemetry" 00:05:07.321 Checking for function "getentropy" : NO 00:05:07.321 Message: lib/eal: Defining dependency "eal" 00:05:07.321 Message: lib/ring: Defining dependency "ring" 00:05:07.321 Message: lib/rcu: Defining dependency "rcu" 00:05:07.321 Message: lib/mempool: Defining dependency "mempool" 00:05:07.321 Message: lib/mbuf: Defining dependency "mbuf" 00:05:07.321 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:07.321 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:07.321 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:07.321 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:07.321 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:07.321 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:05:07.321 Compiler 
for C supports arguments -mpclmul: YES 00:05:07.321 Compiler for C supports arguments -maes: YES 00:05:07.321 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:07.321 Compiler for C supports arguments -mavx512bw: YES 00:05:07.321 Compiler for C supports arguments -mavx512dq: YES 00:05:07.321 Compiler for C supports arguments -mavx512vl: YES 00:05:07.321 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:07.321 Compiler for C supports arguments -mavx2: YES 00:05:07.321 Compiler for C supports arguments -mavx: YES 00:05:07.321 Message: lib/net: Defining dependency "net" 00:05:07.321 Message: lib/meter: Defining dependency "meter" 00:05:07.321 Message: lib/ethdev: Defining dependency "ethdev" 00:05:07.321 Message: lib/pci: Defining dependency "pci" 00:05:07.321 Message: lib/cmdline: Defining dependency "cmdline" 00:05:07.321 Message: lib/hash: Defining dependency "hash" 00:05:07.321 Message: lib/timer: Defining dependency "timer" 00:05:07.321 Message: lib/compressdev: Defining dependency "compressdev" 00:05:07.321 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:07.321 Message: lib/dmadev: Defining dependency "dmadev" 00:05:07.321 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:07.321 Message: lib/power: Defining dependency "power" 00:05:07.321 Message: lib/reorder: Defining dependency "reorder" 00:05:07.321 Message: lib/security: Defining dependency "security" 00:05:07.321 Has header "linux/userfaultfd.h" : YES 00:05:07.321 Has header "linux/vduse.h" : YES 00:05:07.321 Message: lib/vhost: Defining dependency "vhost" 00:05:07.321 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:07.321 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:07.321 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:07.321 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:07.321 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:07.321 Message: 
Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:07.321 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:07.321 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:07.321 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:07.321 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:07.321 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:07.321 Configuring doxy-api-html.conf using configuration 00:05:07.321 Configuring doxy-api-man.conf using configuration 00:05:07.321 Program mandb found: YES (/usr/bin/mandb) 00:05:07.321 Program sphinx-build found: NO 00:05:07.321 Configuring rte_build_config.h using configuration 00:05:07.321 Message: 00:05:07.321 ================= 00:05:07.321 Applications Enabled 00:05:07.321 ================= 00:05:07.321 00:05:07.321 apps: 00:05:07.321 00:05:07.321 00:05:07.321 Message: 00:05:07.321 ================= 00:05:07.321 Libraries Enabled 00:05:07.321 ================= 00:05:07.321 00:05:07.321 libs: 00:05:07.321 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:07.321 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:07.321 cryptodev, dmadev, power, reorder, security, vhost, 00:05:07.321 00:05:07.321 Message: 00:05:07.321 =============== 00:05:07.321 Drivers Enabled 00:05:07.321 =============== 00:05:07.321 00:05:07.321 common: 00:05:07.321 00:05:07.321 bus: 00:05:07.321 pci, vdev, 00:05:07.321 mempool: 00:05:07.321 ring, 00:05:07.321 dma: 00:05:07.321 00:05:07.321 net: 00:05:07.321 00:05:07.321 crypto: 00:05:07.321 00:05:07.321 compress: 00:05:07.321 00:05:07.322 vdpa: 00:05:07.322 00:05:07.322 00:05:07.322 Message: 00:05:07.322 ================= 00:05:07.322 Content Skipped 00:05:07.322 ================= 00:05:07.322 00:05:07.322 apps: 00:05:07.322 dumpcap: explicitly disabled via build config 00:05:07.322 graph: explicitly disabled via build config 00:05:07.322 
pdump: explicitly disabled via build config 00:05:07.322 proc-info: explicitly disabled via build config 00:05:07.322 test-acl: explicitly disabled via build config 00:05:07.322 test-bbdev: explicitly disabled via build config 00:05:07.322 test-cmdline: explicitly disabled via build config 00:05:07.322 test-compress-perf: explicitly disabled via build config 00:05:07.322 test-crypto-perf: explicitly disabled via build config 00:05:07.322 test-dma-perf: explicitly disabled via build config 00:05:07.322 test-eventdev: explicitly disabled via build config 00:05:07.322 test-fib: explicitly disabled via build config 00:05:07.322 test-flow-perf: explicitly disabled via build config 00:05:07.322 test-gpudev: explicitly disabled via build config 00:05:07.322 test-mldev: explicitly disabled via build config 00:05:07.322 test-pipeline: explicitly disabled via build config 00:05:07.322 test-pmd: explicitly disabled via build config 00:05:07.322 test-regex: explicitly disabled via build config 00:05:07.322 test-sad: explicitly disabled via build config 00:05:07.322 test-security-perf: explicitly disabled via build config 00:05:07.322 00:05:07.322 libs: 00:05:07.322 argparse: explicitly disabled via build config 00:05:07.322 metrics: explicitly disabled via build config 00:05:07.322 acl: explicitly disabled via build config 00:05:07.322 bbdev: explicitly disabled via build config 00:05:07.322 bitratestats: explicitly disabled via build config 00:05:07.322 bpf: explicitly disabled via build config 00:05:07.322 cfgfile: explicitly disabled via build config 00:05:07.322 distributor: explicitly disabled via build config 00:05:07.322 efd: explicitly disabled via build config 00:05:07.322 eventdev: explicitly disabled via build config 00:05:07.322 dispatcher: explicitly disabled via build config 00:05:07.322 gpudev: explicitly disabled via build config 00:05:07.322 gro: explicitly disabled via build config 00:05:07.322 gso: explicitly disabled via build config 00:05:07.322 ip_frag: 
explicitly disabled via build config 00:05:07.322 jobstats: explicitly disabled via build config 00:05:07.322 latencystats: explicitly disabled via build config 00:05:07.322 lpm: explicitly disabled via build config 00:05:07.322 member: explicitly disabled via build config 00:05:07.322 pcapng: explicitly disabled via build config 00:05:07.322 rawdev: explicitly disabled via build config 00:05:07.322 regexdev: explicitly disabled via build config 00:05:07.322 mldev: explicitly disabled via build config 00:05:07.322 rib: explicitly disabled via build config 00:05:07.322 sched: explicitly disabled via build config 00:05:07.322 stack: explicitly disabled via build config 00:05:07.322 ipsec: explicitly disabled via build config 00:05:07.322 pdcp: explicitly disabled via build config 00:05:07.322 fib: explicitly disabled via build config 00:05:07.322 port: explicitly disabled via build config 00:05:07.322 pdump: explicitly disabled via build config 00:05:07.322 table: explicitly disabled via build config 00:05:07.322 pipeline: explicitly disabled via build config 00:05:07.322 graph: explicitly disabled via build config 00:05:07.322 node: explicitly disabled via build config 00:05:07.322 00:05:07.322 drivers: 00:05:07.322 common/cpt: not in enabled drivers build config 00:05:07.322 common/dpaax: not in enabled drivers build config 00:05:07.322 common/iavf: not in enabled drivers build config 00:05:07.322 common/idpf: not in enabled drivers build config 00:05:07.322 common/ionic: not in enabled drivers build config 00:05:07.322 common/mvep: not in enabled drivers build config 00:05:07.322 common/octeontx: not in enabled drivers build config 00:05:07.322 bus/auxiliary: not in enabled drivers build config 00:05:07.322 bus/cdx: not in enabled drivers build config 00:05:07.322 bus/dpaa: not in enabled drivers build config 00:05:07.322 bus/fslmc: not in enabled drivers build config 00:05:07.322 bus/ifpga: not in enabled drivers build config 00:05:07.322 bus/platform: not in 
enabled drivers build config 00:05:07.322 bus/uacce: not in enabled drivers build config 00:05:07.322 bus/vmbus: not in enabled drivers build config 00:05:07.322 common/cnxk: not in enabled drivers build config 00:05:07.322 common/mlx5: not in enabled drivers build config 00:05:07.322 common/nfp: not in enabled drivers build config 00:05:07.322 common/nitrox: not in enabled drivers build config 00:05:07.322 common/qat: not in enabled drivers build config 00:05:07.322 common/sfc_efx: not in enabled drivers build config 00:05:07.322 mempool/bucket: not in enabled drivers build config 00:05:07.322 mempool/cnxk: not in enabled drivers build config 00:05:07.322 mempool/dpaa: not in enabled drivers build config 00:05:07.322 mempool/dpaa2: not in enabled drivers build config 00:05:07.322 mempool/octeontx: not in enabled drivers build config 00:05:07.322 mempool/stack: not in enabled drivers build config 00:05:07.322 dma/cnxk: not in enabled drivers build config 00:05:07.322 dma/dpaa: not in enabled drivers build config 00:05:07.322 dma/dpaa2: not in enabled drivers build config 00:05:07.322 dma/hisilicon: not in enabled drivers build config 00:05:07.322 dma/idxd: not in enabled drivers build config 00:05:07.322 dma/ioat: not in enabled drivers build config 00:05:07.322 dma/skeleton: not in enabled drivers build config 00:05:07.322 net/af_packet: not in enabled drivers build config 00:05:07.322 net/af_xdp: not in enabled drivers build config 00:05:07.322 net/ark: not in enabled drivers build config 00:05:07.322 net/atlantic: not in enabled drivers build config 00:05:07.322 net/avp: not in enabled drivers build config 00:05:07.322 net/axgbe: not in enabled drivers build config 00:05:07.322 net/bnx2x: not in enabled drivers build config 00:05:07.322 net/bnxt: not in enabled drivers build config 00:05:07.322 net/bonding: not in enabled drivers build config 00:05:07.322 net/cnxk: not in enabled drivers build config 00:05:07.322 net/cpfl: not in enabled drivers build config 
00:05:07.322 net/cxgbe: not in enabled drivers build config 00:05:07.322 net/dpaa: not in enabled drivers build config 00:05:07.322 net/dpaa2: not in enabled drivers build config 00:05:07.322 net/e1000: not in enabled drivers build config 00:05:07.322 net/ena: not in enabled drivers build config 00:05:07.322 net/enetc: not in enabled drivers build config 00:05:07.322 net/enetfec: not in enabled drivers build config 00:05:07.322 net/enic: not in enabled drivers build config 00:05:07.322 net/failsafe: not in enabled drivers build config 00:05:07.322 net/fm10k: not in enabled drivers build config 00:05:07.322 net/gve: not in enabled drivers build config 00:05:07.322 net/hinic: not in enabled drivers build config 00:05:07.322 net/hns3: not in enabled drivers build config 00:05:07.322 net/i40e: not in enabled drivers build config 00:05:07.322 net/iavf: not in enabled drivers build config 00:05:07.322 net/ice: not in enabled drivers build config 00:05:07.322 net/idpf: not in enabled drivers build config 00:05:07.322 net/igc: not in enabled drivers build config 00:05:07.322 net/ionic: not in enabled drivers build config 00:05:07.322 net/ipn3ke: not in enabled drivers build config 00:05:07.322 net/ixgbe: not in enabled drivers build config 00:05:07.322 net/mana: not in enabled drivers build config 00:05:07.322 net/memif: not in enabled drivers build config 00:05:07.322 net/mlx4: not in enabled drivers build config 00:05:07.322 net/mlx5: not in enabled drivers build config 00:05:07.322 net/mvneta: not in enabled drivers build config 00:05:07.322 net/mvpp2: not in enabled drivers build config 00:05:07.322 net/netvsc: not in enabled drivers build config 00:05:07.322 net/nfb: not in enabled drivers build config 00:05:07.322 net/nfp: not in enabled drivers build config 00:05:07.322 net/ngbe: not in enabled drivers build config 00:05:07.322 net/null: not in enabled drivers build config 00:05:07.322 net/octeontx: not in enabled drivers build config 00:05:07.322 net/octeon_ep: not 
in enabled drivers build config 00:05:07.322 net/pcap: not in enabled drivers build config 00:05:07.322 net/pfe: not in enabled drivers build config 00:05:07.322 net/qede: not in enabled drivers build config 00:05:07.322 net/ring: not in enabled drivers build config 00:05:07.322 net/sfc: not in enabled drivers build config 00:05:07.322 net/softnic: not in enabled drivers build config 00:05:07.322 net/tap: not in enabled drivers build config 00:05:07.322 net/thunderx: not in enabled drivers build config 00:05:07.322 net/txgbe: not in enabled drivers build config 00:05:07.322 net/vdev_netvsc: not in enabled drivers build config 00:05:07.322 net/vhost: not in enabled drivers build config 00:05:07.322 net/virtio: not in enabled drivers build config 00:05:07.322 net/vmxnet3: not in enabled drivers build config 00:05:07.322 raw/*: missing internal dependency, "rawdev" 00:05:07.322 crypto/armv8: not in enabled drivers build config 00:05:07.322 crypto/bcmfs: not in enabled drivers build config 00:05:07.322 crypto/caam_jr: not in enabled drivers build config 00:05:07.322 crypto/ccp: not in enabled drivers build config 00:05:07.322 crypto/cnxk: not in enabled drivers build config 00:05:07.322 crypto/dpaa_sec: not in enabled drivers build config 00:05:07.322 crypto/dpaa2_sec: not in enabled drivers build config 00:05:07.322 crypto/ipsec_mb: not in enabled drivers build config 00:05:07.322 crypto/mlx5: not in enabled drivers build config 00:05:07.322 crypto/mvsam: not in enabled drivers build config 00:05:07.322 crypto/nitrox: not in enabled drivers build config 00:05:07.322 crypto/null: not in enabled drivers build config 00:05:07.322 crypto/octeontx: not in enabled drivers build config 00:05:07.322 crypto/openssl: not in enabled drivers build config 00:05:07.322 crypto/scheduler: not in enabled drivers build config 00:05:07.322 crypto/uadk: not in enabled drivers build config 00:05:07.322 crypto/virtio: not in enabled drivers build config 00:05:07.322 compress/isal: not in 
enabled drivers build config 00:05:07.322 compress/mlx5: not in enabled drivers build config 00:05:07.322 compress/nitrox: not in enabled drivers build config 00:05:07.322 compress/octeontx: not in enabled drivers build config 00:05:07.322 compress/zlib: not in enabled drivers build config 00:05:07.322 regex/*: missing internal dependency, "regexdev" 00:05:07.322 ml/*: missing internal dependency, "mldev" 00:05:07.322 vdpa/ifc: not in enabled drivers build config 00:05:07.322 vdpa/mlx5: not in enabled drivers build config 00:05:07.323 vdpa/nfp: not in enabled drivers build config 00:05:07.323 vdpa/sfc: not in enabled drivers build config 00:05:07.323 event/*: missing internal dependency, "eventdev" 00:05:07.323 baseband/*: missing internal dependency, "bbdev" 00:05:07.323 gpu/*: missing internal dependency, "gpudev" 00:05:07.323 00:05:07.323 00:05:07.323 Build targets in project: 84 00:05:07.323 00:05:07.323 DPDK 24.03.0 00:05:07.323 00:05:07.323 User defined options 00:05:07.323 buildtype : debug 00:05:07.323 default_library : shared 00:05:07.323 libdir : lib 00:05:07.323 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:07.323 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:07.323 c_link_args : 00:05:07.323 cpu_instruction_set: native 00:05:07.323 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:05:07.323 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:05:07.323 enable_docs : false 00:05:07.323 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:07.323 enable_kmods : false 00:05:07.323 max_lcores : 128 00:05:07.323 tests : false 00:05:07.323 00:05:07.323 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:07.323 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:07.323 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:07.323 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:07.323 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:07.323 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:07.323 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:07.323 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:07.323 [7/267] Linking static target lib/librte_kvargs.a 00:05:07.323 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:07.323 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:07.323 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:07.323 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:07.323 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:07.323 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:07.323 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:07.323 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:07.323 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:07.323 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:07.323 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:07.323 [19/267] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:07.323 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:07.323 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:07.323 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:07.323 [23/267] Linking static target lib/librte_log.a 00:05:07.323 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:07.323 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:07.323 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:07.323 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:07.323 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:07.323 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:07.323 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:07.323 [31/267] Linking static target lib/librte_pci.a 00:05:07.323 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:07.323 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:07.323 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:07.588 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:07.588 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:07.588 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:07.588 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:07.588 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.588 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:07.588 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:05:07.588 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:07.588 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:07.588 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:07.588 [45/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:07.589 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:07.589 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.589 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:07.589 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:07.589 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:07.850 [51/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:07.850 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:07.850 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:07.850 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:07.850 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:07.850 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:07.850 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:07.850 [58/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:07.850 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:07.850 [60/267] Linking static target lib/librte_meter.a 00:05:07.850 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:07.850 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:07.850 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:07.850 [64/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:07.850 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:07.850 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:07.850 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:07.850 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:07.850 [69/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:07.850 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:07.850 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:07.850 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:07.850 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:07.850 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:07.850 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:07.850 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:07.850 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:07.850 [78/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:07.850 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:07.850 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:07.850 [81/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:07.850 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:07.850 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:07.850 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:07.850 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:07.850 [86/267] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:07.850 [87/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:07.850 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:07.850 [89/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:07.850 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:07.850 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:07.850 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:07.850 [93/267] Linking static target lib/librte_telemetry.a 00:05:07.850 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:07.850 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:07.851 [96/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:07.851 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:07.851 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:07.851 [99/267] Linking static target lib/librte_timer.a 00:05:07.851 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:07.851 [101/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:07.851 [102/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:07.851 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:07.851 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:07.851 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:07.851 [106/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:07.851 [107/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:07.851 [108/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:07.851 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:07.851 [110/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:07.851 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:07.851 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:07.851 [113/267] Linking static target lib/librte_cmdline.a 00:05:07.851 [114/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:07.851 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:07.851 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:07.851 [117/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:07.851 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:07.851 [119/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:07.851 [120/267] Linking static target lib/librte_ring.a 00:05:07.851 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:07.851 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:07.851 [123/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:07.851 [124/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:07.851 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:07.851 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:07.851 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:07.851 [128/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:07.851 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:07.851 [130/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:07.851 [131/267] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:07.851 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:07.851 [133/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:07.851 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:07.851 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:07.851 [136/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:07.851 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:07.851 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:07.851 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:07.851 [140/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:07.851 [141/267] Linking static target lib/librte_mempool.a 00:05:07.851 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:07.851 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:07.851 [144/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:07.851 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:07.851 [146/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:07.851 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:07.851 [148/267] Linking static target lib/librte_net.a 00:05:07.851 [149/267] Linking static target lib/librte_dmadev.a 00:05:07.851 [150/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:07.851 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:07.851 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:07.851 [153/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:07.851 [154/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:07.851 [155/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:07.851 [156/267] Linking static target lib/librte_power.a 00:05:07.851 [157/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.851 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:07.851 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:07.851 [160/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:07.851 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:07.851 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:07.851 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:07.851 [164/267] Linking static target lib/librte_reorder.a 00:05:07.851 [165/267] Linking static target lib/librte_eal.a 00:05:08.112 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:08.113 [167/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:08.113 [168/267] Linking target lib/librte_log.so.24.1 00:05:08.113 [169/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:08.113 [170/267] Linking static target lib/librte_rcu.a 00:05:08.113 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:08.113 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:08.113 [173/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:08.113 [174/267] Linking static target lib/librte_compressdev.a 00:05:08.113 [175/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.113 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:08.113 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:08.113 [178/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:05:08.113 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:08.113 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:08.113 [181/267] Linking static target lib/librte_security.a 00:05:08.113 [182/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:08.113 [183/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:08.113 [184/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:08.113 [185/267] Linking static target drivers/librte_bus_vdev.a 00:05:08.113 [186/267] Linking static target lib/librte_hash.a 00:05:08.113 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:08.113 [188/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:08.113 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:08.113 [190/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:08.113 [191/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:08.113 [192/267] Linking static target lib/librte_mbuf.a 00:05:08.113 [193/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:08.113 [194/267] Linking target lib/librte_kvargs.so.24.1 00:05:08.113 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:08.113 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:08.113 [197/267] Linking static target drivers/librte_bus_pci.a 00:05:08.113 [198/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:08.374 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:05:08.374 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:08.374 [202/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [203/267] Linking static target drivers/librte_mempool_ring.a 00:05:08.374 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:08.374 [206/267] Linking static target lib/librte_cryptodev.a 00:05:08.374 [207/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:08.374 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:08.374 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [211/267] Linking target lib/librte_telemetry.so.24.1 00:05:08.374 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.374 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.635 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:08.635 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.897 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:08.897 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.897 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:08.897 [219/267] Linking static target lib/librte_ethdev.a 00:05:08.897 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.897 
[221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.897 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.158 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.158 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.158 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.158 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.104 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:10.104 [228/267] Linking static target lib/librte_vhost.a 00:05:10.673 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.054 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.638 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.578 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.578 [233/267] Linking target lib/librte_eal.so.24.1 00:05:19.839 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:19.839 [235/267] Linking target lib/librte_ring.so.24.1 00:05:19.839 [236/267] Linking target lib/librte_pci.so.24.1 00:05:19.839 [237/267] Linking target lib/librte_meter.so.24.1 00:05:19.839 [238/267] Linking target lib/librte_timer.so.24.1 00:05:19.839 [239/267] Linking target lib/librte_dmadev.so.24.1 00:05:19.839 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:05:19.839 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:19.839 [242/267] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:19.839 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:19.839 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:19.839 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:19.839 [246/267] Linking target lib/librte_rcu.so.24.1 00:05:19.839 [247/267] Linking target lib/librte_mempool.so.24.1 00:05:19.839 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:05:20.100 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:20.100 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:20.100 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:05:20.100 [252/267] Linking target lib/librte_mbuf.so.24.1 00:05:20.360 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:20.360 [254/267] Linking target lib/librte_compressdev.so.24.1 00:05:20.360 [255/267] Linking target lib/librte_reorder.so.24.1 00:05:20.360 [256/267] Linking target lib/librte_net.so.24.1 00:05:20.360 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:05:20.360 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:20.360 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:20.360 [260/267] Linking target lib/librte_ethdev.so.24.1 00:05:20.360 [261/267] Linking target lib/librte_cmdline.so.24.1 00:05:20.360 [262/267] Linking target lib/librte_hash.so.24.1 00:05:20.360 [263/267] Linking target lib/librte_security.so.24.1 00:05:20.620 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:20.620 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:20.620 [266/267] Linking target lib/librte_power.so.24.1 00:05:20.620 
[267/267] Linking target lib/librte_vhost.so.24.1 00:05:20.620 INFO: autodetecting backend as ninja 00:05:20.620 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:05:23.165 CC lib/ut/ut.o 00:05:23.165 CC lib/ut_mock/mock.o 00:05:23.165 CC lib/log/log.o 00:05:23.165 CC lib/log/log_flags.o 00:05:23.165 CC lib/log/log_deprecated.o 00:05:23.165 LIB libspdk_ut.a 00:05:23.165 SO libspdk_ut.so.2.0 00:05:23.425 LIB libspdk_ut_mock.a 00:05:23.425 LIB libspdk_log.a 00:05:23.425 SYMLINK libspdk_ut.so 00:05:23.425 SO libspdk_ut_mock.so.6.0 00:05:23.425 SO libspdk_log.so.7.1 00:05:23.425 SYMLINK libspdk_ut_mock.so 00:05:23.425 SYMLINK libspdk_log.so 00:05:23.686 CC lib/dma/dma.o 00:05:23.686 CC lib/util/base64.o 00:05:23.686 CC lib/util/bit_array.o 00:05:23.686 CC lib/util/cpuset.o 00:05:23.686 CC lib/ioat/ioat.o 00:05:23.686 CC lib/util/crc16.o 00:05:23.686 CXX lib/trace_parser/trace.o 00:05:23.686 CC lib/util/crc32.o 00:05:23.686 CC lib/util/crc32c.o 00:05:23.686 CC lib/util/crc32_ieee.o 00:05:23.686 CC lib/util/crc64.o 00:05:23.686 CC lib/util/dif.o 00:05:23.686 CC lib/util/fd.o 00:05:23.686 CC lib/util/fd_group.o 00:05:23.686 CC lib/util/file.o 00:05:23.686 CC lib/util/hexlify.o 00:05:23.686 CC lib/util/iov.o 00:05:23.686 CC lib/util/math.o 00:05:23.686 CC lib/util/net.o 00:05:23.686 CC lib/util/pipe.o 00:05:23.686 CC lib/util/strerror_tls.o 00:05:23.686 CC lib/util/string.o 00:05:23.686 CC lib/util/uuid.o 00:05:23.686 CC lib/util/xor.o 00:05:23.686 CC lib/util/zipf.o 00:05:23.686 CC lib/util/md5.o 00:05:23.946 CC lib/vfio_user/host/vfio_user_pci.o 00:05:23.946 CC lib/vfio_user/host/vfio_user.o 00:05:23.946 LIB libspdk_dma.a 00:05:23.946 SO libspdk_dma.so.5.0 00:05:23.946 LIB libspdk_ioat.a 00:05:24.206 SYMLINK libspdk_dma.so 00:05:24.206 SO libspdk_ioat.so.7.0 00:05:24.206 SYMLINK libspdk_ioat.so 00:05:24.206 LIB libspdk_vfio_user.a 00:05:24.206 SO libspdk_vfio_user.so.5.0 
00:05:24.206 SYMLINK libspdk_vfio_user.so 00:05:24.467 LIB libspdk_util.a 00:05:24.467 SO libspdk_util.so.10.1 00:05:24.467 SYMLINK libspdk_util.so 00:05:24.729 LIB libspdk_trace_parser.a 00:05:24.729 SO libspdk_trace_parser.so.6.0 00:05:24.729 SYMLINK libspdk_trace_parser.so 00:05:24.989 CC lib/json/json_parse.o 00:05:24.989 CC lib/json/json_util.o 00:05:24.989 CC lib/json/json_write.o 00:05:24.989 CC lib/rdma_utils/rdma_utils.o 00:05:24.989 CC lib/env_dpdk/env.o 00:05:24.989 CC lib/env_dpdk/memory.o 00:05:24.989 CC lib/vmd/vmd.o 00:05:24.989 CC lib/conf/conf.o 00:05:24.989 CC lib/env_dpdk/pci.o 00:05:24.989 CC lib/vmd/led.o 00:05:24.989 CC lib/idxd/idxd.o 00:05:24.989 CC lib/env_dpdk/init.o 00:05:24.989 CC lib/idxd/idxd_user.o 00:05:24.989 CC lib/env_dpdk/threads.o 00:05:24.989 CC lib/idxd/idxd_kernel.o 00:05:24.989 CC lib/env_dpdk/pci_ioat.o 00:05:24.989 CC lib/env_dpdk/pci_virtio.o 00:05:24.989 CC lib/env_dpdk/pci_vmd.o 00:05:24.989 CC lib/env_dpdk/pci_idxd.o 00:05:24.989 CC lib/env_dpdk/pci_event.o 00:05:24.989 CC lib/env_dpdk/sigbus_handler.o 00:05:24.989 CC lib/env_dpdk/pci_dpdk.o 00:05:24.989 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:24.989 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:25.251 LIB libspdk_conf.a 00:05:25.251 LIB libspdk_rdma_utils.a 00:05:25.251 LIB libspdk_json.a 00:05:25.251 SO libspdk_conf.so.6.0 00:05:25.251 SO libspdk_rdma_utils.so.1.0 00:05:25.251 SO libspdk_json.so.6.0 00:05:25.251 SYMLINK libspdk_conf.so 00:05:25.251 SYMLINK libspdk_rdma_utils.so 00:05:25.251 SYMLINK libspdk_json.so 00:05:25.513 LIB libspdk_idxd.a 00:05:25.513 LIB libspdk_vmd.a 00:05:25.513 SO libspdk_idxd.so.12.1 00:05:25.513 SO libspdk_vmd.so.6.0 00:05:25.513 SYMLINK libspdk_idxd.so 00:05:25.775 SYMLINK libspdk_vmd.so 00:05:25.775 CC lib/rdma_provider/common.o 00:05:25.775 CC lib/jsonrpc/jsonrpc_server.o 00:05:25.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:25.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:25.775 CC lib/jsonrpc/jsonrpc_client.o 00:05:25.775 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:05:26.036 LIB libspdk_rdma_provider.a 00:05:26.036 LIB libspdk_jsonrpc.a 00:05:26.036 SO libspdk_rdma_provider.so.7.0 00:05:26.036 SO libspdk_jsonrpc.so.6.0 00:05:26.036 SYMLINK libspdk_rdma_provider.so 00:05:26.036 SYMLINK libspdk_jsonrpc.so 00:05:26.298 LIB libspdk_env_dpdk.a 00:05:26.298 SO libspdk_env_dpdk.so.15.1 00:05:26.298 SYMLINK libspdk_env_dpdk.so 00:05:26.298 CC lib/rpc/rpc.o 00:05:26.560 LIB libspdk_rpc.a 00:05:26.560 SO libspdk_rpc.so.6.0 00:05:26.822 SYMLINK libspdk_rpc.so 00:05:27.083 CC lib/trace/trace.o 00:05:27.083 CC lib/trace/trace_flags.o 00:05:27.083 CC lib/trace/trace_rpc.o 00:05:27.083 CC lib/keyring/keyring.o 00:05:27.083 CC lib/keyring/keyring_rpc.o 00:05:27.083 CC lib/notify/notify.o 00:05:27.083 CC lib/notify/notify_rpc.o 00:05:27.344 LIB libspdk_notify.a 00:05:27.344 SO libspdk_notify.so.6.0 00:05:27.344 LIB libspdk_keyring.a 00:05:27.344 LIB libspdk_trace.a 00:05:27.344 SO libspdk_keyring.so.2.0 00:05:27.344 SYMLINK libspdk_notify.so 00:05:27.344 SO libspdk_trace.so.11.0 00:05:27.344 SYMLINK libspdk_keyring.so 00:05:27.603 SYMLINK libspdk_trace.so 00:05:27.863 CC lib/sock/sock.o 00:05:27.863 CC lib/sock/sock_rpc.o 00:05:27.863 CC lib/thread/thread.o 00:05:27.863 CC lib/thread/iobuf.o 00:05:28.123 LIB libspdk_sock.a 00:05:28.123 SO libspdk_sock.so.10.0 00:05:28.384 SYMLINK libspdk_sock.so 00:05:28.644 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:28.644 CC lib/nvme/nvme_ctrlr.o 00:05:28.644 CC lib/nvme/nvme_fabric.o 00:05:28.644 CC lib/nvme/nvme_ns_cmd.o 00:05:28.644 CC lib/nvme/nvme_ns.o 00:05:28.644 CC lib/nvme/nvme_pcie_common.o 00:05:28.644 CC lib/nvme/nvme_pcie.o 00:05:28.644 CC lib/nvme/nvme_qpair.o 00:05:28.644 CC lib/nvme/nvme.o 00:05:28.644 CC lib/nvme/nvme_quirks.o 00:05:28.644 CC lib/nvme/nvme_transport.o 00:05:28.644 CC lib/nvme/nvme_discovery.o 00:05:28.644 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:28.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:28.644 CC lib/nvme/nvme_tcp.o 00:05:28.644 CC 
lib/nvme/nvme_opal.o 00:05:28.644 CC lib/nvme/nvme_io_msg.o 00:05:28.644 CC lib/nvme/nvme_poll_group.o 00:05:28.644 CC lib/nvme/nvme_zns.o 00:05:28.644 CC lib/nvme/nvme_stubs.o 00:05:28.644 CC lib/nvme/nvme_auth.o 00:05:28.644 CC lib/nvme/nvme_cuse.o 00:05:28.644 CC lib/nvme/nvme_vfio_user.o 00:05:28.644 CC lib/nvme/nvme_rdma.o 00:05:28.644 LIB libspdk_thread.a 00:05:28.905 SO libspdk_thread.so.11.0 00:05:28.905 SYMLINK libspdk_thread.so 00:05:29.164 CC lib/virtio/virtio.o 00:05:29.164 CC lib/accel/accel_rpc.o 00:05:29.164 CC lib/virtio/virtio_vfio_user.o 00:05:29.164 CC lib/accel/accel.o 00:05:29.164 CC lib/accel/accel_sw.o 00:05:29.164 CC lib/virtio/virtio_vhost_user.o 00:05:29.164 CC lib/virtio/virtio_pci.o 00:05:29.164 CC lib/vfu_tgt/tgt_endpoint.o 00:05:29.164 CC lib/vfu_tgt/tgt_rpc.o 00:05:29.164 CC lib/blob/blobstore.o 00:05:29.164 CC lib/init/json_config.o 00:05:29.164 CC lib/blob/request.o 00:05:29.164 CC lib/blob/blob_bs_dev.o 00:05:29.164 CC lib/blob/zeroes.o 00:05:29.164 CC lib/fsdev/fsdev.o 00:05:29.164 CC lib/fsdev/fsdev_rpc.o 00:05:29.164 CC lib/init/subsystem.o 00:05:29.164 CC lib/fsdev/fsdev_io.o 00:05:29.164 CC lib/init/subsystem_rpc.o 00:05:29.164 CC lib/init/rpc.o 00:05:29.425 LIB libspdk_init.a 00:05:29.425 SO libspdk_init.so.6.0 00:05:29.425 LIB libspdk_virtio.a 00:05:29.425 LIB libspdk_vfu_tgt.a 00:05:29.686 SYMLINK libspdk_init.so 00:05:29.686 SO libspdk_virtio.so.7.0 00:05:29.686 SO libspdk_vfu_tgt.so.3.0 00:05:29.686 SYMLINK libspdk_virtio.so 00:05:29.686 SYMLINK libspdk_vfu_tgt.so 00:05:29.947 LIB libspdk_fsdev.a 00:05:29.947 SO libspdk_fsdev.so.2.0 00:05:29.947 CC lib/event/app.o 00:05:29.947 CC lib/event/reactor.o 00:05:29.947 CC lib/event/log_rpc.o 00:05:29.947 CC lib/event/app_rpc.o 00:05:29.947 CC lib/event/scheduler_static.o 00:05:29.947 SYMLINK libspdk_fsdev.so 00:05:30.208 LIB libspdk_accel.a 00:05:30.208 SO libspdk_accel.so.16.0 00:05:30.208 SYMLINK libspdk_accel.so 00:05:30.208 CC lib/fuse_dispatcher/fuse_dispatcher.o 
00:05:30.469 LIB libspdk_event.a 00:05:30.469 SO libspdk_event.so.14.0 00:05:30.469 SYMLINK libspdk_event.so 00:05:30.729 LIB libspdk_nvme.a 00:05:30.730 CC lib/bdev/bdev.o 00:05:30.730 CC lib/bdev/bdev_rpc.o 00:05:30.730 CC lib/bdev/bdev_zone.o 00:05:30.730 CC lib/bdev/part.o 00:05:30.730 CC lib/bdev/scsi_nvme.o 00:05:30.730 SO libspdk_nvme.so.15.0 00:05:31.033 LIB libspdk_fuse_dispatcher.a 00:05:31.033 SO libspdk_fuse_dispatcher.so.1.0 00:05:31.033 SYMLINK libspdk_nvme.so 00:05:31.033 SYMLINK libspdk_fuse_dispatcher.so 00:05:32.038 LIB libspdk_blob.a 00:05:32.038 SO libspdk_blob.so.12.0 00:05:32.038 SYMLINK libspdk_blob.so 00:05:32.298 CC lib/blobfs/blobfs.o 00:05:32.298 CC lib/blobfs/tree.o 00:05:32.298 CC lib/lvol/lvol.o 00:05:33.241 LIB libspdk_bdev.a 00:05:33.241 LIB libspdk_blobfs.a 00:05:33.241 SO libspdk_bdev.so.17.0 00:05:33.241 SO libspdk_blobfs.so.11.0 00:05:33.241 LIB libspdk_lvol.a 00:05:33.241 SYMLINK libspdk_bdev.so 00:05:33.241 SYMLINK libspdk_blobfs.so 00:05:33.241 SO libspdk_lvol.so.11.0 00:05:33.241 SYMLINK libspdk_lvol.so 00:05:33.501 CC lib/nvmf/ctrlr.o 00:05:33.501 CC lib/nvmf/ctrlr_discovery.o 00:05:33.501 CC lib/nvmf/ctrlr_bdev.o 00:05:33.501 CC lib/nvmf/subsystem.o 00:05:33.501 CC lib/nvmf/nvmf.o 00:05:33.501 CC lib/nvmf/nvmf_rpc.o 00:05:33.501 CC lib/nvmf/transport.o 00:05:33.501 CC lib/nvmf/tcp.o 00:05:33.501 CC lib/nbd/nbd.o 00:05:33.501 CC lib/nvmf/stubs.o 00:05:33.501 CC lib/nbd/nbd_rpc.o 00:05:33.501 CC lib/nvmf/mdns_server.o 00:05:33.501 CC lib/nvmf/vfio_user.o 00:05:33.501 CC lib/nvmf/rdma.o 00:05:33.501 CC lib/nvmf/auth.o 00:05:33.501 CC lib/scsi/dev.o 00:05:33.501 CC lib/scsi/lun.o 00:05:33.501 CC lib/ftl/ftl_core.o 00:05:33.501 CC lib/scsi/port.o 00:05:33.501 CC lib/ublk/ublk.o 00:05:33.501 CC lib/ublk/ublk_rpc.o 00:05:33.501 CC lib/ftl/ftl_init.o 00:05:33.501 CC lib/scsi/scsi.o 00:05:33.501 CC lib/scsi/scsi_bdev.o 00:05:33.501 CC lib/ftl/ftl_layout.o 00:05:33.501 CC lib/scsi/scsi_pr.o 00:05:33.501 CC lib/ftl/ftl_debug.o 
00:05:33.501 CC lib/ftl/ftl_io.o 00:05:33.501 CC lib/scsi/scsi_rpc.o 00:05:33.501 CC lib/scsi/task.o 00:05:33.501 CC lib/ftl/ftl_sb.o 00:05:33.501 CC lib/ftl/ftl_l2p.o 00:05:33.501 CC lib/ftl/ftl_l2p_flat.o 00:05:33.763 CC lib/ftl/ftl_nv_cache.o 00:05:33.763 CC lib/ftl/ftl_band.o 00:05:33.763 CC lib/ftl/ftl_band_ops.o 00:05:33.763 CC lib/ftl/ftl_writer.o 00:05:33.763 CC lib/ftl/ftl_rq.o 00:05:33.763 CC lib/ftl/ftl_reloc.o 00:05:33.763 CC lib/ftl/ftl_l2p_cache.o 00:05:33.763 CC lib/ftl/ftl_p2l.o 00:05:33.763 CC lib/ftl/ftl_p2l_log.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:33.763 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:33.763 CC lib/ftl/utils/ftl_conf.o 00:05:33.763 CC lib/ftl/utils/ftl_md.o 00:05:33.763 CC lib/ftl/utils/ftl_mempool.o 00:05:33.763 CC lib/ftl/utils/ftl_bitmap.o 00:05:33.763 CC lib/ftl/utils/ftl_property.o 00:05:33.763 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:33.763 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:33.763 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:33.763 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:33.763 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:33.763 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:33.763 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:33.763 CC lib/ftl/base/ftl_base_dev.o 00:05:33.763 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:33.763 CC lib/ftl/ftl_trace.o 00:05:33.763 CC lib/ftl/base/ftl_base_bdev.o 00:05:34.333 LIB libspdk_nbd.a 00:05:34.333 SO libspdk_nbd.so.7.0 00:05:34.333 SYMLINK libspdk_nbd.so 00:05:34.333 LIB libspdk_scsi.a 00:05:34.594 SO libspdk_scsi.so.9.0 00:05:34.594 LIB libspdk_ublk.a 00:05:34.594 SYMLINK libspdk_scsi.so 00:05:34.594 SO libspdk_ublk.so.3.0 00:05:34.594 SYMLINK libspdk_ublk.so 00:05:34.855 LIB libspdk_ftl.a 00:05:34.855 CC lib/iscsi/conn.o 00:05:34.855 CC lib/iscsi/init_grp.o 00:05:34.855 CC lib/iscsi/iscsi.o 00:05:34.855 CC lib/iscsi/param.o 00:05:34.855 CC lib/iscsi/portal_grp.o 00:05:34.855 CC lib/iscsi/tgt_node.o 00:05:34.855 CC lib/vhost/vhost.o 00:05:34.855 CC lib/iscsi/iscsi_subsystem.o 00:05:34.855 CC lib/vhost/vhost_rpc.o 00:05:34.855 CC lib/iscsi/iscsi_rpc.o 00:05:34.855 CC lib/vhost/vhost_scsi.o 00:05:34.855 CC lib/iscsi/task.o 00:05:34.855 CC lib/vhost/vhost_blk.o 00:05:34.855 CC lib/vhost/rte_vhost_user.o 00:05:35.117 SO libspdk_ftl.so.9.0 00:05:35.378 SYMLINK libspdk_ftl.so 00:05:35.640 LIB libspdk_nvmf.a 00:05:35.901 SO libspdk_nvmf.so.20.0 00:05:35.901 LIB libspdk_vhost.a 00:05:35.901 SO libspdk_vhost.so.8.0 00:05:35.901 SYMLINK libspdk_nvmf.so 00:05:36.162 SYMLINK libspdk_vhost.so 00:05:36.162 LIB libspdk_iscsi.a 00:05:36.162 SO libspdk_iscsi.so.8.0 00:05:36.423 SYMLINK libspdk_iscsi.so 00:05:36.995 CC module/env_dpdk/env_dpdk_rpc.o 00:05:36.995 CC module/vfu_device/vfu_virtio.o 00:05:36.995 CC module/vfu_device/vfu_virtio_blk.o 00:05:36.995 CC module/vfu_device/vfu_virtio_scsi.o 00:05:36.995 CC module/vfu_device/vfu_virtio_rpc.o 00:05:36.995 CC module/vfu_device/vfu_virtio_fs.o 00:05:36.995 LIB libspdk_env_dpdk_rpc.a 00:05:37.257 CC module/keyring/file/keyring_rpc.o 00:05:37.257 CC module/keyring/file/keyring.o 00:05:37.257 CC module/sock/posix/posix.o 00:05:37.257 CC module/blob/bdev/blob_bdev.o 00:05:37.257 CC module/fsdev/aio/fsdev_aio.o 00:05:37.257 CC module/keyring/linux/keyring.o 00:05:37.257 
CC module/accel/iaa/accel_iaa.o 00:05:37.257 CC module/scheduler/gscheduler/gscheduler.o 00:05:37.257 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:37.257 CC module/keyring/linux/keyring_rpc.o 00:05:37.257 CC module/accel/iaa/accel_iaa_rpc.o 00:05:37.257 CC module/fsdev/aio/linux_aio_mgr.o 00:05:37.257 CC module/accel/ioat/accel_ioat.o 00:05:37.257 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:37.257 CC module/accel/ioat/accel_ioat_rpc.o 00:05:37.257 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:37.257 CC module/accel/error/accel_error.o 00:05:37.257 CC module/accel/error/accel_error_rpc.o 00:05:37.257 CC module/accel/dsa/accel_dsa.o 00:05:37.257 CC module/accel/dsa/accel_dsa_rpc.o 00:05:37.257 SO libspdk_env_dpdk_rpc.so.6.0 00:05:37.257 SYMLINK libspdk_env_dpdk_rpc.so 00:05:37.257 LIB libspdk_keyring_file.a 00:05:37.257 LIB libspdk_scheduler_dpdk_governor.a 00:05:37.257 LIB libspdk_keyring_linux.a 00:05:37.257 LIB libspdk_scheduler_gscheduler.a 00:05:37.257 SO libspdk_keyring_file.so.2.0 00:05:37.257 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:37.257 SO libspdk_keyring_linux.so.1.0 00:05:37.519 LIB libspdk_scheduler_dynamic.a 00:05:37.519 SO libspdk_scheduler_gscheduler.so.4.0 00:05:37.519 LIB libspdk_accel_error.a 00:05:37.519 LIB libspdk_accel_ioat.a 00:05:37.519 LIB libspdk_accel_iaa.a 00:05:37.519 SYMLINK libspdk_keyring_file.so 00:05:37.519 SO libspdk_scheduler_dynamic.so.4.0 00:05:37.519 SO libspdk_accel_ioat.so.6.0 00:05:37.519 SO libspdk_accel_error.so.2.0 00:05:37.519 LIB libspdk_blob_bdev.a 00:05:37.519 SO libspdk_accel_iaa.so.3.0 00:05:37.519 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:37.519 SYMLINK libspdk_keyring_linux.so 00:05:37.519 SYMLINK libspdk_scheduler_gscheduler.so 00:05:37.519 LIB libspdk_accel_dsa.a 00:05:37.519 SO libspdk_blob_bdev.so.12.0 00:05:37.519 SYMLINK libspdk_scheduler_dynamic.so 00:05:37.519 SYMLINK libspdk_accel_ioat.so 00:05:37.519 SO libspdk_accel_dsa.so.5.0 00:05:37.519 SYMLINK libspdk_accel_error.so 
00:05:37.519 SYMLINK libspdk_accel_iaa.so 00:05:37.519 SYMLINK libspdk_blob_bdev.so 00:05:37.519 LIB libspdk_vfu_device.a 00:05:37.519 SYMLINK libspdk_accel_dsa.so 00:05:37.519 SO libspdk_vfu_device.so.3.0 00:05:37.780 SYMLINK libspdk_vfu_device.so 00:05:37.780 LIB libspdk_fsdev_aio.a 00:05:37.780 SO libspdk_fsdev_aio.so.1.0 00:05:37.780 LIB libspdk_sock_posix.a 00:05:37.780 SO libspdk_sock_posix.so.6.0 00:05:38.041 SYMLINK libspdk_fsdev_aio.so 00:05:38.041 SYMLINK libspdk_sock_posix.so 00:05:38.041 CC module/bdev/delay/vbdev_delay.o 00:05:38.041 CC module/bdev/gpt/gpt.o 00:05:38.041 CC module/bdev/gpt/vbdev_gpt.o 00:05:38.041 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:38.041 CC module/blobfs/bdev/blobfs_bdev.o 00:05:38.041 CC module/bdev/null/bdev_null.o 00:05:38.041 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:38.041 CC module/bdev/null/bdev_null_rpc.o 00:05:38.041 CC module/bdev/error/vbdev_error.o 00:05:38.041 CC module/bdev/split/vbdev_split.o 00:05:38.041 CC module/bdev/error/vbdev_error_rpc.o 00:05:38.041 CC module/bdev/split/vbdev_split_rpc.o 00:05:38.041 CC module/bdev/aio/bdev_aio.o 00:05:38.041 CC module/bdev/malloc/bdev_malloc.o 00:05:38.041 CC module/bdev/aio/bdev_aio_rpc.o 00:05:38.041 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:38.041 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:38.041 CC module/bdev/raid/bdev_raid.o 00:05:38.041 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:38.041 CC module/bdev/raid/bdev_raid_rpc.o 00:05:38.041 CC module/bdev/raid/bdev_raid_sb.o 00:05:38.041 CC module/bdev/passthru/vbdev_passthru.o 00:05:38.041 CC module/bdev/raid/raid0.o 00:05:38.041 CC module/bdev/lvol/vbdev_lvol.o 00:05:38.041 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:38.041 CC module/bdev/raid/raid1.o 00:05:38.041 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:38.041 CC module/bdev/raid/concat.o 00:05:38.041 CC module/bdev/nvme/bdev_nvme.o 00:05:38.041 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:38.041 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:05:38.041 CC module/bdev/ftl/bdev_ftl.o 00:05:38.041 CC module/bdev/nvme/nvme_rpc.o 00:05:38.041 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:38.041 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:38.042 CC module/bdev/iscsi/bdev_iscsi.o 00:05:38.042 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:38.042 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:38.042 CC module/bdev/nvme/bdev_mdns_client.o 00:05:38.042 CC module/bdev/nvme/vbdev_opal.o 00:05:38.042 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:38.042 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:38.304 LIB libspdk_blobfs_bdev.a 00:05:38.565 SO libspdk_blobfs_bdev.so.6.0 00:05:38.565 LIB libspdk_bdev_split.a 00:05:38.565 SO libspdk_bdev_split.so.6.0 00:05:38.565 LIB libspdk_bdev_error.a 00:05:38.565 LIB libspdk_bdev_null.a 00:05:38.565 SYMLINK libspdk_blobfs_bdev.so 00:05:38.565 LIB libspdk_bdev_gpt.a 00:05:38.565 SO libspdk_bdev_null.so.6.0 00:05:38.565 SO libspdk_bdev_error.so.6.0 00:05:38.565 LIB libspdk_bdev_passthru.a 00:05:38.565 LIB libspdk_bdev_aio.a 00:05:38.565 SYMLINK libspdk_bdev_split.so 00:05:38.565 SO libspdk_bdev_gpt.so.6.0 00:05:38.565 LIB libspdk_bdev_ftl.a 00:05:38.565 SO libspdk_bdev_passthru.so.6.0 00:05:38.565 LIB libspdk_bdev_delay.a 00:05:38.565 LIB libspdk_bdev_zone_block.a 00:05:38.565 SO libspdk_bdev_aio.so.6.0 00:05:38.565 LIB libspdk_bdev_malloc.a 00:05:38.565 SYMLINK libspdk_bdev_null.so 00:05:38.565 SO libspdk_bdev_ftl.so.6.0 00:05:38.565 SYMLINK libspdk_bdev_gpt.so 00:05:38.565 SYMLINK libspdk_bdev_error.so 00:05:38.565 SO libspdk_bdev_zone_block.so.6.0 00:05:38.565 SO libspdk_bdev_delay.so.6.0 00:05:38.565 SO libspdk_bdev_malloc.so.6.0 00:05:38.565 LIB libspdk_bdev_iscsi.a 00:05:38.565 SYMLINK libspdk_bdev_passthru.so 00:05:38.565 SYMLINK libspdk_bdev_aio.so 00:05:38.827 SYMLINK libspdk_bdev_ftl.so 00:05:38.827 SO libspdk_bdev_iscsi.so.6.0 00:05:38.827 SYMLINK libspdk_bdev_delay.so 00:05:38.827 SYMLINK libspdk_bdev_zone_block.so 00:05:38.827 SYMLINK 
libspdk_bdev_malloc.so 00:05:38.827 LIB libspdk_bdev_virtio.a 00:05:38.827 LIB libspdk_bdev_lvol.a 00:05:38.827 SO libspdk_bdev_virtio.so.6.0 00:05:38.827 SYMLINK libspdk_bdev_iscsi.so 00:05:38.827 SO libspdk_bdev_lvol.so.6.0 00:05:38.827 SYMLINK libspdk_bdev_virtio.so 00:05:38.827 SYMLINK libspdk_bdev_lvol.so 00:05:39.088 LIB libspdk_bdev_raid.a 00:05:39.349 SO libspdk_bdev_raid.so.6.0 00:05:39.349 SYMLINK libspdk_bdev_raid.so 00:05:40.738 LIB libspdk_bdev_nvme.a 00:05:40.738 SO libspdk_bdev_nvme.so.7.1 00:05:40.738 SYMLINK libspdk_bdev_nvme.so 00:05:41.311 CC module/event/subsystems/vmd/vmd.o 00:05:41.311 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:41.311 CC module/event/subsystems/iobuf/iobuf.o 00:05:41.311 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:41.311 CC module/event/subsystems/sock/sock.o 00:05:41.311 CC module/event/subsystems/scheduler/scheduler.o 00:05:41.311 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:41.311 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:41.311 CC module/event/subsystems/keyring/keyring.o 00:05:41.311 CC module/event/subsystems/fsdev/fsdev.o 00:05:41.574 LIB libspdk_event_scheduler.a 00:05:41.574 LIB libspdk_event_vfu_tgt.a 00:05:41.574 LIB libspdk_event_vmd.a 00:05:41.574 LIB libspdk_event_keyring.a 00:05:41.574 LIB libspdk_event_vhost_blk.a 00:05:41.574 LIB libspdk_event_fsdev.a 00:05:41.574 LIB libspdk_event_sock.a 00:05:41.574 LIB libspdk_event_iobuf.a 00:05:41.574 SO libspdk_event_keyring.so.1.0 00:05:41.574 SO libspdk_event_scheduler.so.4.0 00:05:41.574 SO libspdk_event_vfu_tgt.so.3.0 00:05:41.574 SO libspdk_event_sock.so.5.0 00:05:41.574 SO libspdk_event_vmd.so.6.0 00:05:41.574 SO libspdk_event_vhost_blk.so.3.0 00:05:41.574 SO libspdk_event_fsdev.so.1.0 00:05:41.574 SO libspdk_event_iobuf.so.3.0 00:05:41.574 SYMLINK libspdk_event_keyring.so 00:05:41.574 SYMLINK libspdk_event_vfu_tgt.so 00:05:41.574 SYMLINK libspdk_event_scheduler.so 00:05:41.574 SYMLINK libspdk_event_sock.so 00:05:41.574 SYMLINK 
libspdk_event_vhost_blk.so 00:05:41.837 SYMLINK libspdk_event_vmd.so 00:05:41.837 SYMLINK libspdk_event_fsdev.so 00:05:41.837 SYMLINK libspdk_event_iobuf.so 00:05:42.098 CC module/event/subsystems/accel/accel.o 00:05:42.359 LIB libspdk_event_accel.a 00:05:42.360 SO libspdk_event_accel.so.6.0 00:05:42.360 SYMLINK libspdk_event_accel.so 00:05:42.621 CC module/event/subsystems/bdev/bdev.o 00:05:42.881 LIB libspdk_event_bdev.a 00:05:42.881 SO libspdk_event_bdev.so.6.0 00:05:42.881 SYMLINK libspdk_event_bdev.so 00:05:43.454 CC module/event/subsystems/scsi/scsi.o 00:05:43.454 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:43.454 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:43.454 CC module/event/subsystems/nbd/nbd.o 00:05:43.454 CC module/event/subsystems/ublk/ublk.o 00:05:43.454 LIB libspdk_event_nbd.a 00:05:43.454 LIB libspdk_event_ublk.a 00:05:43.454 LIB libspdk_event_scsi.a 00:05:43.454 SO libspdk_event_nbd.so.6.0 00:05:43.454 SO libspdk_event_ublk.so.3.0 00:05:43.454 SO libspdk_event_scsi.so.6.0 00:05:43.715 LIB libspdk_event_nvmf.a 00:05:43.715 SYMLINK libspdk_event_nbd.so 00:05:43.715 SYMLINK libspdk_event_ublk.so 00:05:43.715 SYMLINK libspdk_event_scsi.so 00:05:43.715 SO libspdk_event_nvmf.so.6.0 00:05:43.715 SYMLINK libspdk_event_nvmf.so 00:05:43.976 CC module/event/subsystems/iscsi/iscsi.o 00:05:43.976 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:44.239 LIB libspdk_event_vhost_scsi.a 00:05:44.239 LIB libspdk_event_iscsi.a 00:05:44.239 SO libspdk_event_vhost_scsi.so.3.0 00:05:44.239 SO libspdk_event_iscsi.so.6.0 00:05:44.239 SYMLINK libspdk_event_vhost_scsi.so 00:05:44.239 SYMLINK libspdk_event_iscsi.so 00:05:44.500 SO libspdk.so.6.0 00:05:44.500 SYMLINK libspdk.so 00:05:45.081 CC app/trace_record/trace_record.o 00:05:45.081 CXX app/trace/trace.o 00:05:45.081 CC test/rpc_client/rpc_client_test.o 00:05:45.081 CC app/spdk_nvme_perf/perf.o 00:05:45.081 CC app/spdk_nvme_identify/identify.o 00:05:45.081 CC app/spdk_lspci/spdk_lspci.o 00:05:45.081 CC 
app/spdk_top/spdk_top.o 00:05:45.081 CC app/spdk_nvme_discover/discovery_aer.o 00:05:45.081 TEST_HEADER include/spdk/accel.h 00:05:45.081 TEST_HEADER include/spdk/accel_module.h 00:05:45.081 TEST_HEADER include/spdk/assert.h 00:05:45.081 TEST_HEADER include/spdk/barrier.h 00:05:45.081 TEST_HEADER include/spdk/base64.h 00:05:45.081 TEST_HEADER include/spdk/bdev_module.h 00:05:45.081 TEST_HEADER include/spdk/bdev.h 00:05:45.081 TEST_HEADER include/spdk/bdev_zone.h 00:05:45.081 TEST_HEADER include/spdk/bit_pool.h 00:05:45.081 TEST_HEADER include/spdk/bit_array.h 00:05:45.081 TEST_HEADER include/spdk/blob_bdev.h 00:05:45.081 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:45.081 TEST_HEADER include/spdk/blobfs.h 00:05:45.081 TEST_HEADER include/spdk/blob.h 00:05:45.081 TEST_HEADER include/spdk/conf.h 00:05:45.081 TEST_HEADER include/spdk/config.h 00:05:45.081 TEST_HEADER include/spdk/cpuset.h 00:05:45.081 TEST_HEADER include/spdk/crc16.h 00:05:45.081 TEST_HEADER include/spdk/crc32.h 00:05:45.081 TEST_HEADER include/spdk/crc64.h 00:05:45.081 TEST_HEADER include/spdk/dif.h 00:05:45.081 TEST_HEADER include/spdk/dma.h 00:05:45.081 TEST_HEADER include/spdk/endian.h 00:05:45.081 TEST_HEADER include/spdk/env_dpdk.h 00:05:45.081 TEST_HEADER include/spdk/env.h 00:05:45.081 TEST_HEADER include/spdk/event.h 00:05:45.081 TEST_HEADER include/spdk/fd_group.h 00:05:45.081 TEST_HEADER include/spdk/file.h 00:05:45.081 TEST_HEADER include/spdk/fd.h 00:05:45.081 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:45.081 TEST_HEADER include/spdk/fsdev.h 00:05:45.081 CC app/spdk_dd/spdk_dd.o 00:05:45.081 TEST_HEADER include/spdk/ftl.h 00:05:45.081 TEST_HEADER include/spdk/fsdev_module.h 00:05:45.081 CC app/iscsi_tgt/iscsi_tgt.o 00:05:45.081 CC app/nvmf_tgt/nvmf_main.o 00:05:45.081 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:45.081 TEST_HEADER include/spdk/gpt_spec.h 00:05:45.081 TEST_HEADER include/spdk/hexlify.h 00:05:45.081 TEST_HEADER include/spdk/histogram_data.h 00:05:45.081 TEST_HEADER 
include/spdk/idxd_spec.h 00:05:45.081 TEST_HEADER include/spdk/idxd.h 00:05:45.081 TEST_HEADER include/spdk/init.h 00:05:45.081 TEST_HEADER include/spdk/ioat.h 00:05:45.081 TEST_HEADER include/spdk/ioat_spec.h 00:05:45.081 TEST_HEADER include/spdk/iscsi_spec.h 00:05:45.081 TEST_HEADER include/spdk/json.h 00:05:45.081 TEST_HEADER include/spdk/jsonrpc.h 00:05:45.081 TEST_HEADER include/spdk/keyring.h 00:05:45.081 CC app/spdk_tgt/spdk_tgt.o 00:05:45.081 TEST_HEADER include/spdk/keyring_module.h 00:05:45.081 TEST_HEADER include/spdk/likely.h 00:05:45.081 TEST_HEADER include/spdk/lvol.h 00:05:45.081 TEST_HEADER include/spdk/log.h 00:05:45.081 TEST_HEADER include/spdk/md5.h 00:05:45.081 TEST_HEADER include/spdk/mmio.h 00:05:45.081 TEST_HEADER include/spdk/memory.h 00:05:45.081 TEST_HEADER include/spdk/nbd.h 00:05:45.081 TEST_HEADER include/spdk/net.h 00:05:45.081 TEST_HEADER include/spdk/nvme.h 00:05:45.081 TEST_HEADER include/spdk/notify.h 00:05:45.081 TEST_HEADER include/spdk/nvme_intel.h 00:05:45.081 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:45.081 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:45.081 TEST_HEADER include/spdk/nvme_spec.h 00:05:45.081 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:45.081 TEST_HEADER include/spdk/nvme_zns.h 00:05:45.081 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:45.081 TEST_HEADER include/spdk/nvmf.h 00:05:45.081 TEST_HEADER include/spdk/nvmf_spec.h 00:05:45.081 TEST_HEADER include/spdk/opal.h 00:05:45.081 TEST_HEADER include/spdk/nvmf_transport.h 00:05:45.081 TEST_HEADER include/spdk/opal_spec.h 00:05:45.081 TEST_HEADER include/spdk/pci_ids.h 00:05:45.081 TEST_HEADER include/spdk/pipe.h 00:05:45.081 TEST_HEADER include/spdk/queue.h 00:05:45.081 TEST_HEADER include/spdk/reduce.h 00:05:45.081 TEST_HEADER include/spdk/rpc.h 00:05:45.081 TEST_HEADER include/spdk/scheduler.h 00:05:45.081 TEST_HEADER include/spdk/scsi.h 00:05:45.081 TEST_HEADER include/spdk/scsi_spec.h 00:05:45.081 TEST_HEADER include/spdk/sock.h 00:05:45.081 TEST_HEADER 
include/spdk/stdinc.h 00:05:45.081 TEST_HEADER include/spdk/string.h 00:05:45.081 TEST_HEADER include/spdk/thread.h 00:05:45.081 TEST_HEADER include/spdk/trace.h 00:05:45.081 TEST_HEADER include/spdk/trace_parser.h 00:05:45.081 TEST_HEADER include/spdk/tree.h 00:05:45.081 TEST_HEADER include/spdk/ublk.h 00:05:45.081 TEST_HEADER include/spdk/util.h 00:05:45.081 TEST_HEADER include/spdk/uuid.h 00:05:45.081 TEST_HEADER include/spdk/version.h 00:05:45.081 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:45.081 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:45.081 TEST_HEADER include/spdk/vhost.h 00:05:45.081 TEST_HEADER include/spdk/vmd.h 00:05:45.081 TEST_HEADER include/spdk/xor.h 00:05:45.081 CXX test/cpp_headers/accel.o 00:05:45.081 TEST_HEADER include/spdk/zipf.h 00:05:45.081 CXX test/cpp_headers/accel_module.o 00:05:45.081 CXX test/cpp_headers/assert.o 00:05:45.082 CXX test/cpp_headers/barrier.o 00:05:45.082 CXX test/cpp_headers/base64.o 00:05:45.082 CXX test/cpp_headers/bdev.o 00:05:45.082 CXX test/cpp_headers/bdev_zone.o 00:05:45.082 CXX test/cpp_headers/bdev_module.o 00:05:45.082 CXX test/cpp_headers/bit_array.o 00:05:45.082 CXX test/cpp_headers/bit_pool.o 00:05:45.082 CXX test/cpp_headers/blob_bdev.o 00:05:45.082 CXX test/cpp_headers/blobfs_bdev.o 00:05:45.082 CXX test/cpp_headers/blobfs.o 00:05:45.082 CXX test/cpp_headers/blob.o 00:05:45.082 CXX test/cpp_headers/conf.o 00:05:45.082 CXX test/cpp_headers/config.o 00:05:45.082 CXX test/cpp_headers/cpuset.o 00:05:45.082 CXX test/cpp_headers/crc16.o 00:05:45.082 CXX test/cpp_headers/crc32.o 00:05:45.082 CXX test/cpp_headers/crc64.o 00:05:45.082 CXX test/cpp_headers/dif.o 00:05:45.082 CXX test/cpp_headers/dma.o 00:05:45.082 CXX test/cpp_headers/endian.o 00:05:45.082 CXX test/cpp_headers/env_dpdk.o 00:05:45.082 CXX test/cpp_headers/fd_group.o 00:05:45.082 CXX test/cpp_headers/env.o 00:05:45.082 CXX test/cpp_headers/event.o 00:05:45.082 CXX test/cpp_headers/fd.o 00:05:45.082 CXX test/cpp_headers/fsdev.o 00:05:45.082 
CXX test/cpp_headers/fuse_dispatcher.o 00:05:45.082 CXX test/cpp_headers/file.o 00:05:45.082 CXX test/cpp_headers/fsdev_module.o 00:05:45.082 CXX test/cpp_headers/ftl.o 00:05:45.082 CXX test/cpp_headers/hexlify.o 00:05:45.082 CXX test/cpp_headers/gpt_spec.o 00:05:45.082 CXX test/cpp_headers/histogram_data.o 00:05:45.082 CXX test/cpp_headers/idxd.o 00:05:45.082 CXX test/cpp_headers/idxd_spec.o 00:05:45.082 CXX test/cpp_headers/init.o 00:05:45.082 CXX test/cpp_headers/ioat.o 00:05:45.082 CXX test/cpp_headers/ioat_spec.o 00:05:45.082 CXX test/cpp_headers/iscsi_spec.o 00:05:45.082 CXX test/cpp_headers/json.o 00:05:45.082 CXX test/cpp_headers/keyring_module.o 00:05:45.082 CXX test/cpp_headers/jsonrpc.o 00:05:45.082 CXX test/cpp_headers/keyring.o 00:05:45.082 CC examples/util/zipf/zipf.o 00:05:45.082 CXX test/cpp_headers/log.o 00:05:45.082 CXX test/cpp_headers/likely.o 00:05:45.082 CXX test/cpp_headers/md5.o 00:05:45.082 CXX test/cpp_headers/lvol.o 00:05:45.082 CC examples/ioat/verify/verify.o 00:05:45.082 CXX test/cpp_headers/memory.o 00:05:45.082 CXX test/cpp_headers/mmio.o 00:05:45.082 CXX test/cpp_headers/nbd.o 00:05:45.082 CXX test/cpp_headers/nvme.o 00:05:45.082 CXX test/cpp_headers/net.o 00:05:45.082 CXX test/cpp_headers/notify.o 00:05:45.082 CXX test/cpp_headers/nvme_ocssd.o 00:05:45.082 CXX test/cpp_headers/nvme_intel.o 00:05:45.082 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:45.082 CXX test/cpp_headers/nvme_spec.o 00:05:45.082 CC examples/ioat/perf/perf.o 00:05:45.082 CXX test/cpp_headers/nvme_zns.o 00:05:45.082 CXX test/cpp_headers/nvmf_cmd.o 00:05:45.082 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:45.082 CC test/thread/poller_perf/poller_perf.o 00:05:45.082 CC app/fio/nvme/fio_plugin.o 00:05:45.082 CXX test/cpp_headers/nvmf_spec.o 00:05:45.082 CXX test/cpp_headers/nvmf.o 00:05:45.082 LINK spdk_lspci 00:05:45.082 CXX test/cpp_headers/nvmf_transport.o 00:05:45.082 CXX test/cpp_headers/opal.o 00:05:45.082 CC test/app/stub/stub.o 00:05:45.082 CXX 
test/cpp_headers/opal_spec.o 00:05:45.082 CXX test/cpp_headers/pipe.o 00:05:45.082 CXX test/cpp_headers/pci_ids.o 00:05:45.082 CXX test/cpp_headers/scheduler.o 00:05:45.082 CXX test/cpp_headers/queue.o 00:05:45.082 CXX test/cpp_headers/reduce.o 00:05:45.082 CXX test/cpp_headers/rpc.o 00:05:45.082 CXX test/cpp_headers/scsi_spec.o 00:05:45.353 CXX test/cpp_headers/sock.o 00:05:45.353 CXX test/cpp_headers/scsi.o 00:05:45.353 CXX test/cpp_headers/stdinc.o 00:05:45.353 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:45.353 CXX test/cpp_headers/thread.o 00:05:45.353 CXX test/cpp_headers/string.o 00:05:45.353 CC test/env/memory/memory_ut.o 00:05:45.353 CXX test/cpp_headers/trace_parser.o 00:05:45.353 CXX test/cpp_headers/trace.o 00:05:45.353 CC test/app/jsoncat/jsoncat.o 00:05:45.353 CC test/app/histogram_perf/histogram_perf.o 00:05:45.353 CXX test/cpp_headers/tree.o 00:05:45.353 CXX test/cpp_headers/util.o 00:05:45.353 CXX test/cpp_headers/ublk.o 00:05:45.353 CC test/env/vtophys/vtophys.o 00:05:45.353 CXX test/cpp_headers/uuid.o 00:05:45.353 CXX test/cpp_headers/version.o 00:05:45.353 CXX test/cpp_headers/vfio_user_pci.o 00:05:45.353 CXX test/cpp_headers/vfio_user_spec.o 00:05:45.353 CXX test/cpp_headers/vhost.o 00:05:45.353 CXX test/cpp_headers/vmd.o 00:05:45.353 CXX test/cpp_headers/zipf.o 00:05:45.353 CC test/env/pci/pci_ut.o 00:05:45.353 CXX test/cpp_headers/xor.o 00:05:45.353 CC test/app/bdev_svc/bdev_svc.o 00:05:45.353 CC test/dma/test_dma/test_dma.o 00:05:45.353 CC app/fio/bdev/fio_plugin.o 00:05:45.353 LINK spdk_nvme_discover 00:05:45.353 LINK rpc_client_test 00:05:45.353 LINK interrupt_tgt 00:05:45.353 LINK nvmf_tgt 00:05:45.626 LINK iscsi_tgt 00:05:45.626 LINK spdk_trace_record 00:05:45.626 LINK spdk_tgt 00:05:45.890 CC test/env/mem_callbacks/mem_callbacks.o 00:05:45.890 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:45.890 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:45.890 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:45.890 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:45.890 LINK verify 00:05:46.151 LINK zipf 00:05:46.151 LINK spdk_dd 00:05:46.151 LINK poller_perf 00:05:46.151 LINK jsoncat 00:05:46.151 LINK histogram_perf 00:05:46.151 LINK stub 00:05:46.151 LINK bdev_svc 00:05:46.151 LINK env_dpdk_post_init 00:05:46.411 LINK spdk_trace 00:05:46.411 LINK vtophys 00:05:46.411 LINK ioat_perf 00:05:46.673 LINK spdk_nvme_identify 00:05:46.673 LINK vhost_fuzz 00:05:46.673 LINK pci_ut 00:05:46.673 LINK nvme_fuzz 00:05:46.673 CC examples/vmd/lsvmd/lsvmd.o 00:05:46.673 LINK test_dma 00:05:46.673 LINK spdk_nvme 00:05:46.673 CC examples/idxd/perf/perf.o 00:05:46.673 CC examples/vmd/led/led.o 00:05:46.673 LINK spdk_bdev 00:05:46.673 CC examples/sock/hello_world/hello_sock.o 00:05:46.673 LINK mem_callbacks 00:05:46.673 CC test/event/event_perf/event_perf.o 00:05:46.673 CC examples/thread/thread/thread_ex.o 00:05:46.673 CC test/event/reactor_perf/reactor_perf.o 00:05:46.673 CC test/event/reactor/reactor.o 00:05:46.673 CC test/event/app_repeat/app_repeat.o 00:05:46.673 CC test/event/scheduler/scheduler.o 00:05:46.935 CC app/vhost/vhost.o 00:05:46.935 LINK lsvmd 00:05:46.935 LINK spdk_nvme_perf 00:05:46.935 LINK led 00:05:46.935 LINK spdk_top 00:05:46.935 LINK reactor_perf 00:05:46.935 LINK event_perf 00:05:46.935 LINK reactor 00:05:46.935 LINK hello_sock 00:05:46.935 LINK app_repeat 00:05:46.935 LINK vhost 00:05:46.935 LINK thread 00:05:46.935 LINK idxd_perf 00:05:47.196 LINK scheduler 00:05:47.196 CC test/nvme/e2edp/nvme_dp.o 00:05:47.196 CC test/nvme/aer/aer.o 00:05:47.196 CC test/nvme/sgl/sgl.o 00:05:47.196 CC test/nvme/cuse/cuse.o 00:05:47.196 CC test/nvme/startup/startup.o 00:05:47.196 CC test/nvme/overhead/overhead.o 00:05:47.196 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:47.196 CC test/nvme/err_injection/err_injection.o 00:05:47.196 CC test/nvme/reset/reset.o 00:05:47.196 CC test/nvme/connect_stress/connect_stress.o 00:05:47.196 CC test/nvme/simple_copy/simple_copy.o 00:05:47.196 CC 
test/nvme/fused_ordering/fused_ordering.o 00:05:47.196 CC test/nvme/reserve/reserve.o 00:05:47.196 CC test/nvme/fdp/fdp.o 00:05:47.196 CC test/nvme/compliance/nvme_compliance.o 00:05:47.196 CC test/nvme/boot_partition/boot_partition.o 00:05:47.196 LINK memory_ut 00:05:47.196 CC test/accel/dif/dif.o 00:05:47.457 CC test/blobfs/mkfs/mkfs.o 00:05:47.457 CC test/lvol/esnap/esnap.o 00:05:47.457 LINK startup 00:05:47.457 LINK boot_partition 00:05:47.457 LINK connect_stress 00:05:47.457 LINK err_injection 00:05:47.457 CC examples/nvme/hotplug/hotplug.o 00:05:47.457 CC examples/nvme/arbitration/arbitration.o 00:05:47.457 LINK fused_ordering 00:05:47.457 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:47.457 LINK doorbell_aers 00:05:47.457 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:47.457 CC examples/nvme/hello_world/hello_world.o 00:05:47.457 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:47.457 CC examples/nvme/abort/abort.o 00:05:47.718 CC examples/nvme/reconnect/reconnect.o 00:05:47.718 LINK reserve 00:05:47.718 LINK sgl 00:05:47.718 LINK simple_copy 00:05:47.718 LINK aer 00:05:47.718 LINK nvme_dp 00:05:47.718 LINK mkfs 00:05:47.718 LINK reset 00:05:47.718 CC examples/accel/perf/accel_perf.o 00:05:47.718 LINK overhead 00:05:47.718 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:47.718 LINK fdp 00:05:47.718 LINK nvme_compliance 00:05:47.718 CC examples/blob/hello_world/hello_blob.o 00:05:47.718 CC examples/blob/cli/blobcli.o 00:05:47.718 LINK iscsi_fuzz 00:05:47.718 LINK pmr_persistence 00:05:47.718 LINK cmb_copy 00:05:47.718 LINK hotplug 00:05:47.979 LINK hello_world 00:05:47.979 LINK arbitration 00:05:47.979 LINK reconnect 00:05:47.979 LINK abort 00:05:47.979 LINK hello_blob 00:05:47.979 LINK hello_fsdev 00:05:47.979 LINK dif 00:05:47.979 LINK nvme_manage 00:05:48.240 LINK accel_perf 00:05:48.240 LINK blobcli 00:05:48.501 LINK cuse 00:05:48.762 CC test/bdev/bdevio/bdevio.o 00:05:48.762 CC examples/bdev/hello_world/hello_bdev.o 00:05:48.762 CC 
examples/bdev/bdevperf/bdevperf.o 00:05:49.023 LINK hello_bdev 00:05:49.023 LINK bdevio 00:05:49.596 LINK bdevperf 00:05:50.169 CC examples/nvmf/nvmf/nvmf.o 00:05:50.431 LINK nvmf 00:05:51.817 LINK esnap 00:05:52.403 00:05:52.403 real 0m54.833s 00:05:52.403 user 8m5.896s 00:05:52.403 sys 5m34.364s 00:05:52.403 12:49:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:52.403 12:49:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:52.403 ************************************ 00:05:52.403 END TEST make 00:05:52.403 ************************************ 00:05:52.403 12:49:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:52.403 12:49:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:52.403 12:49:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:52.403 12:49:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.403 12:49:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:52.403 12:49:54 -- pm/common@44 -- $ pid=596514 00:05:52.403 12:49:54 -- pm/common@50 -- $ kill -TERM 596514 00:05:52.403 12:49:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.403 12:49:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:52.403 12:49:54 -- pm/common@44 -- $ pid=596515 00:05:52.403 12:49:54 -- pm/common@50 -- $ kill -TERM 596515 00:05:52.403 12:49:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.403 12:49:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:52.403 12:49:54 -- pm/common@44 -- $ pid=596517 00:05:52.403 12:49:54 -- pm/common@50 -- $ kill -TERM 596517 00:05:52.403 12:49:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.403 12:49:54 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:52.403 12:49:54 -- pm/common@44 -- $ pid=596540 00:05:52.403 12:49:54 -- pm/common@50 -- $ sudo -E kill -TERM 596540 00:05:52.403 12:49:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:52.403 12:49:54 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:52.403 12:49:54 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.403 12:49:54 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.403 12:49:54 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.403 12:49:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.403 12:49:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.403 12:49:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.403 12:49:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.403 12:49:55 -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.403 12:49:55 -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.403 12:49:55 -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.403 12:49:55 -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.403 12:49:55 -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.403 12:49:55 -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.403 12:49:55 -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.403 12:49:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.403 12:49:55 -- scripts/common.sh@344 -- # case "$op" in 00:05:52.403 12:49:55 -- scripts/common.sh@345 -- # : 1 00:05:52.403 12:49:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.403 12:49:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.403 12:49:55 -- scripts/common.sh@365 -- # decimal 1 00:05:52.403 12:49:55 -- scripts/common.sh@353 -- # local d=1 00:05:52.403 12:49:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.403 12:49:55 -- scripts/common.sh@355 -- # echo 1 00:05:52.403 12:49:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.403 12:49:55 -- scripts/common.sh@366 -- # decimal 2 00:05:52.403 12:49:55 -- scripts/common.sh@353 -- # local d=2 00:05:52.403 12:49:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.403 12:49:55 -- scripts/common.sh@355 -- # echo 2 00:05:52.403 12:49:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.403 12:49:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.403 12:49:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.403 12:49:55 -- scripts/common.sh@368 -- # return 0 00:05:52.403 12:49:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.403 12:49:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.403 --rc genhtml_branch_coverage=1 00:05:52.403 --rc genhtml_function_coverage=1 00:05:52.403 --rc genhtml_legend=1 00:05:52.403 --rc geninfo_all_blocks=1 00:05:52.403 --rc geninfo_unexecuted_blocks=1 00:05:52.403 00:05:52.403 ' 00:05:52.664 12:49:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.664 --rc genhtml_branch_coverage=1 00:05:52.664 --rc genhtml_function_coverage=1 00:05:52.664 --rc genhtml_legend=1 00:05:52.664 --rc geninfo_all_blocks=1 00:05:52.664 --rc geninfo_unexecuted_blocks=1 00:05:52.664 00:05:52.664 ' 00:05:52.664 12:49:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.664 --rc genhtml_branch_coverage=1 00:05:52.664 --rc 
genhtml_function_coverage=1 00:05:52.664 --rc genhtml_legend=1 00:05:52.664 --rc geninfo_all_blocks=1 00:05:52.664 --rc geninfo_unexecuted_blocks=1 00:05:52.664 00:05:52.664 ' 00:05:52.664 12:49:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.664 --rc genhtml_branch_coverage=1 00:05:52.664 --rc genhtml_function_coverage=1 00:05:52.664 --rc genhtml_legend=1 00:05:52.664 --rc geninfo_all_blocks=1 00:05:52.664 --rc geninfo_unexecuted_blocks=1 00:05:52.664 00:05:52.664 ' 00:05:52.664 12:49:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.664 12:49:55 -- nvmf/common.sh@7 -- # uname -s 00:05:52.664 12:49:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.664 12:49:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.664 12:49:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.664 12:49:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.664 12:49:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.664 12:49:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.664 12:49:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.664 12:49:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.664 12:49:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.664 12:49:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.664 12:49:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:52.664 12:49:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:52.664 12:49:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.664 12:49:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.664 12:49:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.664 12:49:55 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.664 12:49:55 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.664 12:49:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.664 12:49:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.664 12:49:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.664 12:49:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.664 12:49:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.664 12:49:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.664 12:49:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.664 12:49:55 -- paths/export.sh@5 -- # export PATH 00:05:52.664 12:49:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.664 12:49:55 -- nvmf/common.sh@51 -- # : 0 00:05:52.664 12:49:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.664 12:49:55 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:52.664 12:49:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.664 12:49:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.664 12:49:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.664 12:49:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.664 12:49:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:52.664 12:49:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.664 12:49:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.664 12:49:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:52.664 12:49:55 -- spdk/autotest.sh@32 -- # uname -s 00:05:52.664 12:49:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:52.664 12:49:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:52.664 12:49:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:52.664 12:49:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:52.664 12:49:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:52.664 12:49:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:52.664 12:49:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:52.664 12:49:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:52.664 12:49:55 -- spdk/autotest.sh@48 -- # udevadm_pid=661754 00:05:52.664 12:49:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:52.664 12:49:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:52.664 12:49:55 -- pm/common@17 -- # local monitor 00:05:52.664 12:49:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.664 12:49:55 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:52.664 12:49:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.664 12:49:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:52.664 12:49:55 -- pm/common@21 -- # date +%s 00:05:52.664 12:49:55 -- pm/common@25 -- # sleep 1 00:05:52.664 12:49:55 -- pm/common@21 -- # date +%s 00:05:52.664 12:49:55 -- pm/common@21 -- # date +%s 00:05:52.664 12:49:55 -- pm/common@21 -- # date +%s 00:05:52.664 12:49:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880995 00:05:52.664 12:49:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880995 00:05:52.664 12:49:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880995 00:05:52.664 12:49:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880995 00:05:52.664 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880995_collect-cpu-load.pm.log 00:05:52.665 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880995_collect-vmstat.pm.log 00:05:52.665 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880995_collect-cpu-temp.pm.log 00:05:52.665 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880995_collect-bmc-pm.bmc.pm.log 00:05:53.607 
12:49:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:53.607 12:49:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:53.607 12:49:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.607 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.607 12:49:56 -- spdk/autotest.sh@59 -- # create_test_list 00:05:53.607 12:49:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:53.607 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.607 12:49:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:53.607 12:49:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:53.607 12:49:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:53.607 12:49:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:53.607 12:49:56 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:53.607 12:49:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:53.607 12:49:56 -- common/autotest_common.sh@1457 -- # uname 00:05:53.607 12:49:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:53.607 12:49:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:53.607 12:49:56 -- common/autotest_common.sh@1477 -- # uname 00:05:53.607 12:49:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:53.607 12:49:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:53.607 12:49:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:53.867 lcov: LCOV version 1.15 00:05:53.867 12:49:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:08.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:08.802 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:26.939 12:50:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:26.939 12:50:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.939 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:06:26.939 12:50:26 -- spdk/autotest.sh@78 -- # rm -f 00:06:26.939 12:50:26 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:27.510 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:06:27.510 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:06:27.510 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:06:27.510 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:06:27.510 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:06:27.510 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:65:00.0 (144d a80a): Already using the nvme driver 00:06:27.771 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:06:27.771 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:06:27.771 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:06:28.032 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:06:28.324 12:50:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:28.324 12:50:30 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:28.324 12:50:30 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:28.324 12:50:30 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:28.324 12:50:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:28.324 12:50:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:28.324 12:50:30 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:28.324 12:50:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:28.324 12:50:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.324 12:50:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:28.324 12:50:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:28.324 12:50:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:28.324 12:50:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:28.324 12:50:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:28.324 12:50:30 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:28.324 No valid GPT data, bailing 00:06:28.324 12:50:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:28.324 12:50:30 -- scripts/common.sh@394 -- # pt= 00:06:28.324 12:50:30 -- scripts/common.sh@395 -- # return 1 00:06:28.324 12:50:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:28.324 1+0 records in 00:06:28.324 1+0 records out 00:06:28.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542559 s, 193 MB/s 00:06:28.324 12:50:30 -- spdk/autotest.sh@105 -- # sync 00:06:28.324 12:50:30 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:28.324 12:50:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:28.324 12:50:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:38.336 12:50:39 -- spdk/autotest.sh@111 -- # uname -s 00:06:38.336 12:50:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:38.336 12:50:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:38.336 12:50:39 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:40.254 Hugepages 00:06:40.254 node hugesize free / total 00:06:40.254 node0 1048576kB 0 / 0 00:06:40.254 node0 2048kB 0 / 0 00:06:40.254 node1 1048576kB 0 / 0 00:06:40.254 node1 2048kB 0 / 0 00:06:40.254 00:06:40.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:40.254 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:40.254 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:40.254 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:40.254 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:40.254 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:40.254 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:40.522 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:40.522 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:40.522 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:40.522 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:40.522 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:40.522 12:50:43 -- spdk/autotest.sh@117 -- # uname -s 00:06:40.522 12:50:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:40.522 12:50:43 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:40.522 12:50:43 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:43.931 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:43.931 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:43.931 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:43.931 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:43.931 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:44.193 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:46.110 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:46.371 12:50:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:47.315 12:50:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:47.315 12:50:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:47.315 12:50:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:47.315 12:50:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:47.315 12:50:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:47.315 12:50:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:47.315 12:50:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.315 12:50:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:47.315 12:50:49 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:06:47.315 12:50:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:47.315 12:50:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:47.315 12:50:49 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:51.521 Waiting for block devices as requested 00:06:51.521 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:51.521 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:51.814 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:51.814 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:51.814 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:52.074 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:52.074 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:52.074 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:52.074 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:52.336 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:52.599 12:50:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:52.599 12:50:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:06:52.599 12:50:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:52.599 12:50:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:52.599 12:50:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:52.599 12:50:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:52.599 12:50:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:06:52.599 12:50:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:52.599 12:50:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:52.599 12:50:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:52.599 12:50:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:52.599 12:50:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:52.599 12:50:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:52.599 12:50:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:52.599 12:50:55 -- common/autotest_common.sh@1543 -- # continue 00:06:52.599 12:50:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:52.599 12:50:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.599 12:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 12:50:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:52.599 12:50:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.599 12:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:52.599 12:50:55 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:56.863 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:06:56.863 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:56.863 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:56.863 12:50:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:56.863 12:50:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.863 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.863 12:50:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:56.863 12:50:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:56.863 12:50:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:56.863 12:50:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:56.863 12:50:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:56.863 12:50:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:56.863 12:50:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:56.863 12:50:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:56.863 12:50:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:56.863 12:50:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:56.863 12:50:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:56.863 12:50:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:56.863 12:50:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:56.863 12:50:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:56.863 12:50:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:06:56.863 12:50:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:56.863 12:50:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:56.863 12:50:59 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:06:56.863 12:50:59 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:56.863 12:50:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:56.863 12:50:59 -- common/autotest_common.sh@1572 -- # return 0 00:06:56.863 12:50:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:56.863 12:50:59 -- common/autotest_common.sh@1580 -- # return 0 00:06:56.863 12:50:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:56.863 12:50:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:56.863 12:50:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:56.863 12:50:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:56.863 12:50:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:56.863 12:50:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.863 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.863 12:50:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:56.863 12:50:59 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:56.863 12:50:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.863 12:50:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.863 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.863 ************************************ 
00:06:56.863 START TEST env 00:06:56.863 ************************************ 00:06:56.863 12:50:59 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:57.125 * Looking for test storage... 00:06:57.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.125 12:50:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.125 12:50:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.125 12:50:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.125 12:50:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.125 12:50:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.125 12:50:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.125 12:50:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.125 12:50:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.125 12:50:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.125 12:50:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.125 12:50:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.125 12:50:59 env -- scripts/common.sh@344 -- # case "$op" in 00:06:57.125 12:50:59 env -- scripts/common.sh@345 -- # : 1 00:06:57.125 12:50:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.125 12:50:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.125 12:50:59 env -- scripts/common.sh@365 -- # decimal 1 00:06:57.125 12:50:59 env -- scripts/common.sh@353 -- # local d=1 00:06:57.125 12:50:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.125 12:50:59 env -- scripts/common.sh@355 -- # echo 1 00:06:57.125 12:50:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.125 12:50:59 env -- scripts/common.sh@366 -- # decimal 2 00:06:57.125 12:50:59 env -- scripts/common.sh@353 -- # local d=2 00:06:57.125 12:50:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.125 12:50:59 env -- scripts/common.sh@355 -- # echo 2 00:06:57.125 12:50:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.125 12:50:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.125 12:50:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.125 12:50:59 env -- scripts/common.sh@368 -- # return 0 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.125 --rc genhtml_branch_coverage=1 00:06:57.125 --rc genhtml_function_coverage=1 00:06:57.125 --rc genhtml_legend=1 00:06:57.125 --rc geninfo_all_blocks=1 00:06:57.125 --rc geninfo_unexecuted_blocks=1 00:06:57.125 00:06:57.125 ' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.125 --rc genhtml_branch_coverage=1 00:06:57.125 --rc genhtml_function_coverage=1 00:06:57.125 --rc genhtml_legend=1 00:06:57.125 --rc geninfo_all_blocks=1 00:06:57.125 --rc geninfo_unexecuted_blocks=1 00:06:57.125 00:06:57.125 ' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:57.125 --rc genhtml_branch_coverage=1 00:06:57.125 --rc genhtml_function_coverage=1 00:06:57.125 --rc genhtml_legend=1 00:06:57.125 --rc geninfo_all_blocks=1 00:06:57.125 --rc geninfo_unexecuted_blocks=1 00:06:57.125 00:06:57.125 ' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.125 --rc genhtml_branch_coverage=1 00:06:57.125 --rc genhtml_function_coverage=1 00:06:57.125 --rc genhtml_legend=1 00:06:57.125 --rc geninfo_all_blocks=1 00:06:57.125 --rc geninfo_unexecuted_blocks=1 00:06:57.125 00:06:57.125 ' 00:06:57.125 12:50:59 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.125 12:50:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.125 12:50:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.125 ************************************ 00:06:57.125 START TEST env_memory 00:06:57.125 ************************************ 00:06:57.125 12:50:59 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:57.125 00:06:57.125 00:06:57.125 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.125 http://cunit.sourceforge.net/ 00:06:57.125 00:06:57.125 00:06:57.125 Suite: memory 00:06:57.387 Test: alloc and free memory map ...[2024-11-29 12:50:59.819387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:57.387 passed 00:06:57.388 Test: mem map translation ...[2024-11-29 12:50:59.844991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:57.388 [2024-11-29 
12:50:59.845018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:57.388 [2024-11-29 12:50:59.845064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:57.388 [2024-11-29 12:50:59.845071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:57.388 passed 00:06:57.388 Test: mem map registration ...[2024-11-29 12:50:59.900271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:57.388 [2024-11-29 12:50:59.900292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:57.388 passed 00:06:57.388 Test: mem map adjacent registrations ...passed 00:06:57.388 00:06:57.388 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.388 suites 1 1 n/a 0 0 00:06:57.388 tests 4 4 4 0 0 00:06:57.388 asserts 152 152 152 0 n/a 00:06:57.388 00:06:57.388 Elapsed time = 0.192 seconds 00:06:57.388 00:06:57.388 real 0m0.207s 00:06:57.388 user 0m0.200s 00:06:57.388 sys 0m0.006s 00:06:57.388 12:50:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.388 12:50:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:57.388 ************************************ 00:06:57.388 END TEST env_memory 00:06:57.388 ************************************ 00:06:57.388 12:51:00 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:57.388 12:51:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:57.388 12:51:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.388 12:51:00 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.388 ************************************ 00:06:57.388 START TEST env_vtophys 00:06:57.388 ************************************ 00:06:57.388 12:51:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:57.650 EAL: lib.eal log level changed from notice to debug 00:06:57.650 EAL: Detected lcore 0 as core 0 on socket 0 00:06:57.650 EAL: Detected lcore 1 as core 1 on socket 0 00:06:57.650 EAL: Detected lcore 2 as core 2 on socket 0 00:06:57.650 EAL: Detected lcore 3 as core 3 on socket 0 00:06:57.650 EAL: Detected lcore 4 as core 4 on socket 0 00:06:57.650 EAL: Detected lcore 5 as core 5 on socket 0 00:06:57.650 EAL: Detected lcore 6 as core 6 on socket 0 00:06:57.650 EAL: Detected lcore 7 as core 7 on socket 0 00:06:57.650 EAL: Detected lcore 8 as core 8 on socket 0 00:06:57.650 EAL: Detected lcore 9 as core 9 on socket 0 00:06:57.650 EAL: Detected lcore 10 as core 10 on socket 0 00:06:57.650 EAL: Detected lcore 11 as core 11 on socket 0 00:06:57.650 EAL: Detected lcore 12 as core 12 on socket 0 00:06:57.650 EAL: Detected lcore 13 as core 13 on socket 0 00:06:57.650 EAL: Detected lcore 14 as core 14 on socket 0 00:06:57.650 EAL: Detected lcore 15 as core 15 on socket 0 00:06:57.650 EAL: Detected lcore 16 as core 16 on socket 0 00:06:57.650 EAL: Detected lcore 17 as core 17 on socket 0 00:06:57.650 EAL: Detected lcore 18 as core 18 on socket 0 00:06:57.650 EAL: Detected lcore 19 as core 19 on socket 0 00:06:57.650 EAL: Detected lcore 20 as core 20 on socket 0 00:06:57.650 EAL: Detected lcore 21 as core 21 on socket 0 00:06:57.650 EAL: Detected lcore 22 as core 22 on socket 0 00:06:57.650 EAL: Detected lcore 23 as core 23 on socket 0 00:06:57.650 EAL: Detected lcore 24 as core 24 on socket 0 00:06:57.650 EAL: Detected lcore 25 
as core 25 on socket 0 00:06:57.650 EAL: Detected lcore 26 as core 26 on socket 0 00:06:57.650 EAL: Detected lcore 27 as core 27 on socket 0 00:06:57.650 EAL: Detected lcore 28 as core 28 on socket 0 00:06:57.650 EAL: Detected lcore 29 as core 29 on socket 0 00:06:57.650 EAL: Detected lcore 30 as core 30 on socket 0 00:06:57.650 EAL: Detected lcore 31 as core 31 on socket 0 00:06:57.650 EAL: Detected lcore 32 as core 32 on socket 0 00:06:57.650 EAL: Detected lcore 33 as core 33 on socket 0 00:06:57.650 EAL: Detected lcore 34 as core 34 on socket 0 00:06:57.650 EAL: Detected lcore 35 as core 35 on socket 0 00:06:57.650 EAL: Detected lcore 36 as core 0 on socket 1 00:06:57.650 EAL: Detected lcore 37 as core 1 on socket 1 00:06:57.650 EAL: Detected lcore 38 as core 2 on socket 1 00:06:57.650 EAL: Detected lcore 39 as core 3 on socket 1 00:06:57.650 EAL: Detected lcore 40 as core 4 on socket 1 00:06:57.650 EAL: Detected lcore 41 as core 5 on socket 1 00:06:57.650 EAL: Detected lcore 42 as core 6 on socket 1 00:06:57.650 EAL: Detected lcore 43 as core 7 on socket 1 00:06:57.650 EAL: Detected lcore 44 as core 8 on socket 1 00:06:57.650 EAL: Detected lcore 45 as core 9 on socket 1 00:06:57.650 EAL: Detected lcore 46 as core 10 on socket 1 00:06:57.650 EAL: Detected lcore 47 as core 11 on socket 1 00:06:57.650 EAL: Detected lcore 48 as core 12 on socket 1 00:06:57.650 EAL: Detected lcore 49 as core 13 on socket 1 00:06:57.650 EAL: Detected lcore 50 as core 14 on socket 1 00:06:57.650 EAL: Detected lcore 51 as core 15 on socket 1 00:06:57.650 EAL: Detected lcore 52 as core 16 on socket 1 00:06:57.650 EAL: Detected lcore 53 as core 17 on socket 1 00:06:57.650 EAL: Detected lcore 54 as core 18 on socket 1 00:06:57.650 EAL: Detected lcore 55 as core 19 on socket 1 00:06:57.650 EAL: Detected lcore 56 as core 20 on socket 1 00:06:57.650 EAL: Detected lcore 57 as core 21 on socket 1 00:06:57.650 EAL: Detected lcore 58 as core 22 on socket 1 00:06:57.650 EAL: Detected lcore 59 as 
core 23 on socket 1 00:06:57.650 EAL: Detected lcore 60 as core 24 on socket 1 00:06:57.650 EAL: Detected lcore 61 as core 25 on socket 1 00:06:57.650 EAL: Detected lcore 62 as core 26 on socket 1 00:06:57.650 EAL: Detected lcore 63 as core 27 on socket 1 00:06:57.650 EAL: Detected lcore 64 as core 28 on socket 1 00:06:57.650 EAL: Detected lcore 65 as core 29 on socket 1 00:06:57.650 EAL: Detected lcore 66 as core 30 on socket 1 00:06:57.650 EAL: Detected lcore 67 as core 31 on socket 1 00:06:57.650 EAL: Detected lcore 68 as core 32 on socket 1 00:06:57.650 EAL: Detected lcore 69 as core 33 on socket 1 00:06:57.650 EAL: Detected lcore 70 as core 34 on socket 1 00:06:57.650 EAL: Detected lcore 71 as core 35 on socket 1 00:06:57.650 EAL: Detected lcore 72 as core 0 on socket 0 00:06:57.650 EAL: Detected lcore 73 as core 1 on socket 0 00:06:57.650 EAL: Detected lcore 74 as core 2 on socket 0 00:06:57.650 EAL: Detected lcore 75 as core 3 on socket 0 00:06:57.650 EAL: Detected lcore 76 as core 4 on socket 0 00:06:57.650 EAL: Detected lcore 77 as core 5 on socket 0 00:06:57.650 EAL: Detected lcore 78 as core 6 on socket 0 00:06:57.650 EAL: Detected lcore 79 as core 7 on socket 0 00:06:57.650 EAL: Detected lcore 80 as core 8 on socket 0 00:06:57.650 EAL: Detected lcore 81 as core 9 on socket 0 00:06:57.650 EAL: Detected lcore 82 as core 10 on socket 0 00:06:57.650 EAL: Detected lcore 83 as core 11 on socket 0 00:06:57.650 EAL: Detected lcore 84 as core 12 on socket 0 00:06:57.650 EAL: Detected lcore 85 as core 13 on socket 0 00:06:57.650 EAL: Detected lcore 86 as core 14 on socket 0 00:06:57.650 EAL: Detected lcore 87 as core 15 on socket 0 00:06:57.650 EAL: Detected lcore 88 as core 16 on socket 0 00:06:57.650 EAL: Detected lcore 89 as core 17 on socket 0 00:06:57.650 EAL: Detected lcore 90 as core 18 on socket 0 00:06:57.650 EAL: Detected lcore 91 as core 19 on socket 0 00:06:57.650 EAL: Detected lcore 92 as core 20 on socket 0 00:06:57.650 EAL: Detected lcore 93 as 
core 21 on socket 0 00:06:57.650 EAL: Detected lcore 94 as core 22 on socket 0 00:06:57.650 EAL: Detected lcore 95 as core 23 on socket 0 00:06:57.650 EAL: Detected lcore 96 as core 24 on socket 0 00:06:57.650 EAL: Detected lcore 97 as core 25 on socket 0 00:06:57.650 EAL: Detected lcore 98 as core 26 on socket 0 00:06:57.650 EAL: Detected lcore 99 as core 27 on socket 0 00:06:57.650 EAL: Detected lcore 100 as core 28 on socket 0 00:06:57.650 EAL: Detected lcore 101 as core 29 on socket 0 00:06:57.650 EAL: Detected lcore 102 as core 30 on socket 0 00:06:57.650 EAL: Detected lcore 103 as core 31 on socket 0 00:06:57.650 EAL: Detected lcore 104 as core 32 on socket 0 00:06:57.650 EAL: Detected lcore 105 as core 33 on socket 0 00:06:57.650 EAL: Detected lcore 106 as core 34 on socket 0 00:06:57.650 EAL: Detected lcore 107 as core 35 on socket 0 00:06:57.650 EAL: Detected lcore 108 as core 0 on socket 1 00:06:57.650 EAL: Detected lcore 109 as core 1 on socket 1 00:06:57.650 EAL: Detected lcore 110 as core 2 on socket 1 00:06:57.650 EAL: Detected lcore 111 as core 3 on socket 1 00:06:57.650 EAL: Detected lcore 112 as core 4 on socket 1 00:06:57.650 EAL: Detected lcore 113 as core 5 on socket 1 00:06:57.650 EAL: Detected lcore 114 as core 6 on socket 1 00:06:57.650 EAL: Detected lcore 115 as core 7 on socket 1 00:06:57.650 EAL: Detected lcore 116 as core 8 on socket 1 00:06:57.650 EAL: Detected lcore 117 as core 9 on socket 1 00:06:57.650 EAL: Detected lcore 118 as core 10 on socket 1 00:06:57.650 EAL: Detected lcore 119 as core 11 on socket 1 00:06:57.650 EAL: Detected lcore 120 as core 12 on socket 1 00:06:57.650 EAL: Detected lcore 121 as core 13 on socket 1 00:06:57.650 EAL: Detected lcore 122 as core 14 on socket 1 00:06:57.650 EAL: Detected lcore 123 as core 15 on socket 1 00:06:57.650 EAL: Detected lcore 124 as core 16 on socket 1 00:06:57.650 EAL: Detected lcore 125 as core 17 on socket 1 00:06:57.650 EAL: Detected lcore 126 as core 18 on socket 1 00:06:57.650 
EAL: Detected lcore 127 as core 19 on socket 1 00:06:57.650 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:57.650 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:57.650 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:57.650 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:57.650 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:57.650 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:57.650 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:57.650 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:57.650 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:57.650 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:57.650 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:57.650 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:57.650 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:57.650 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:57.650 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:57.650 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:57.650 EAL: Maximum logical cores by configuration: 128 00:06:57.650 EAL: Detected CPU lcores: 128 00:06:57.650 EAL: Detected NUMA nodes: 2 00:06:57.650 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:57.650 EAL: Detected shared linkage of DPDK 00:06:57.650 EAL: No shared files mode enabled, IPC will be disabled 00:06:57.650 EAL: Bus pci wants IOVA as 'DC' 00:06:57.650 EAL: Buses did not request a specific IOVA mode. 00:06:57.650 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:57.650 EAL: Selected IOVA mode 'VA' 00:06:57.650 EAL: Probing VFIO support... 00:06:57.650 EAL: IOMMU type 1 (Type 1) is supported 00:06:57.650 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:57.650 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:57.650 EAL: VFIO support initialized 00:06:57.650 EAL: Ask a virtual area of 0x2e000 bytes 00:06:57.650 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:57.650 EAL: Setting up physically contiguous memory... 
00:06:57.650 EAL: Setting maximum number of open files to 524288 00:06:57.650 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:57.650 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:57.650 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:57.650 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.650 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:57.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.650 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.650 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:57.650 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:57.650 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.650 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:57.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.650 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:57.651 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:57.651 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.651 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:57.651 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.651 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.651 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:57.651 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:57.651 EAL: Hugepages will be freed exactly as allocated. 
00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: TSC frequency is ~2400000 KHz 00:06:57.651 EAL: Main lcore 0 is ready (tid=7f9970d70a00;cpuset=[0]) 00:06:57.651 EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 0 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 2MB 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:57.651 EAL: Mem event callback 'spdk:(nil)' registered 00:06:57.651 00:06:57.651 00:06:57.651 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.651 http://cunit.sourceforge.net/ 00:06:57.651 00:06:57.651 00:06:57.651 Suite: components_suite 00:06:57.651 Test: vtophys_malloc_test ...passed 00:06:57.651 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 4MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 4MB 00:06:57.651 EAL: Trying to obtain current memory policy. 
00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 6MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 6MB 00:06:57.651 EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 10MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 10MB 00:06:57.651 EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 18MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 18MB 00:06:57.651 EAL: Trying to obtain current memory policy. 
00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 34MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 34MB 00:06:57.651 EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 66MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 66MB 00:06:57.651 EAL: Trying to obtain current memory policy. 00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 130MB 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was shrunk by 130MB 00:06:57.651 EAL: Trying to obtain current memory policy. 
00:06:57.651 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.651 EAL: Restoring previous memory policy: 4 00:06:57.651 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.651 EAL: request: mp_malloc_sync 00:06:57.651 EAL: No shared files mode enabled, IPC is disabled 00:06:57.651 EAL: Heap on socket 0 was expanded by 258MB 00:06:57.912 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.912 EAL: request: mp_malloc_sync 00:06:57.912 EAL: No shared files mode enabled, IPC is disabled 00:06:57.912 EAL: Heap on socket 0 was shrunk by 258MB 00:06:57.912 EAL: Trying to obtain current memory policy. 00:06:57.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.912 EAL: Restoring previous memory policy: 4 00:06:57.912 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.912 EAL: request: mp_malloc_sync 00:06:57.912 EAL: No shared files mode enabled, IPC is disabled 00:06:57.912 EAL: Heap on socket 0 was expanded by 514MB 00:06:57.912 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.912 EAL: request: mp_malloc_sync 00:06:57.912 EAL: No shared files mode enabled, IPC is disabled 00:06:57.912 EAL: Heap on socket 0 was shrunk by 514MB 00:06:57.912 EAL: Trying to obtain current memory policy. 
00:06:57.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.173 EAL: Restoring previous memory policy: 4 00:06:58.173 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.173 EAL: request: mp_malloc_sync 00:06:58.173 EAL: No shared files mode enabled, IPC is disabled 00:06:58.173 EAL: Heap on socket 0 was expanded by 1026MB 00:06:58.173 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.435 EAL: request: mp_malloc_sync 00:06:58.435 EAL: No shared files mode enabled, IPC is disabled 00:06:58.435 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:58.435 passed 00:06:58.435 00:06:58.435 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.435 suites 1 1 n/a 0 0 00:06:58.435 tests 2 2 2 0 0 00:06:58.435 asserts 497 497 497 0 n/a 00:06:58.435 00:06:58.435 Elapsed time = 0.690 seconds 00:06:58.435 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.435 EAL: request: mp_malloc_sync 00:06:58.435 EAL: No shared files mode enabled, IPC is disabled 00:06:58.435 EAL: Heap on socket 0 was shrunk by 2MB 00:06:58.435 EAL: No shared files mode enabled, IPC is disabled 00:06:58.435 EAL: No shared files mode enabled, IPC is disabled 00:06:58.435 EAL: No shared files mode enabled, IPC is disabled 00:06:58.435 00:06:58.435 real 0m0.853s 00:06:58.435 user 0m0.445s 00:06:58.435 sys 0m0.370s 00:06:58.435 12:51:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.435 12:51:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:58.435 ************************************ 00:06:58.435 END TEST env_vtophys 00:06:58.435 ************************************ 00:06:58.435 12:51:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:58.435 12:51:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.435 12:51:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.435 12:51:00 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.435 
************************************ 00:06:58.435 START TEST env_pci 00:06:58.435 ************************************ 00:06:58.435 12:51:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:58.435 00:06:58.435 00:06:58.435 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.435 http://cunit.sourceforge.net/ 00:06:58.435 00:06:58.435 00:06:58.435 Suite: pci 00:06:58.435 Test: pci_hook ...[2024-11-29 12:51:01.010336] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 681241 has claimed it 00:06:58.435 EAL: Cannot find device (10000:00:01.0) 00:06:58.435 EAL: Failed to attach device on primary process 00:06:58.435 passed 00:06:58.435 00:06:58.435 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.435 suites 1 1 n/a 0 0 00:06:58.435 tests 1 1 1 0 0 00:06:58.435 asserts 25 25 25 0 n/a 00:06:58.435 00:06:58.435 Elapsed time = 0.032 seconds 00:06:58.435 00:06:58.435 real 0m0.055s 00:06:58.435 user 0m0.018s 00:06:58.435 sys 0m0.036s 00:06:58.435 12:51:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.435 12:51:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:58.435 ************************************ 00:06:58.435 END TEST env_pci 00:06:58.435 ************************************ 00:06:58.435 12:51:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:58.435 12:51:01 env -- env/env.sh@15 -- # uname 00:06:58.435 12:51:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:58.435 12:51:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:58.435 12:51:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:58.435 12:51:01 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.435 12:51:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.435 12:51:01 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.696 ************************************ 00:06:58.696 START TEST env_dpdk_post_init 00:06:58.696 ************************************ 00:06:58.696 12:51:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:58.696 EAL: Detected CPU lcores: 128 00:06:58.696 EAL: Detected NUMA nodes: 2 00:06:58.696 EAL: Detected shared linkage of DPDK 00:06:58.696 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:58.696 EAL: Selected IOVA mode 'VA' 00:06:58.696 EAL: VFIO support initialized 00:06:58.696 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:58.696 EAL: Using IOMMU type 1 (Type 1) 00:06:58.957 EAL: Ignore mapping IO port bar(1) 00:06:58.957 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:58.957 EAL: Ignore mapping IO port bar(1) 00:06:59.219 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:59.219 EAL: Ignore mapping IO port bar(1) 00:06:59.480 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:59.480 EAL: Ignore mapping IO port bar(1) 00:06:59.740 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:59.740 EAL: Ignore mapping IO port bar(1) 00:06:59.740 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:07:00.001 EAL: Ignore mapping IO port bar(1) 00:07:00.001 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:07:00.262 EAL: Ignore mapping IO port bar(1) 00:07:00.262 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:07:00.523 EAL: Ignore mapping IO port bar(1) 00:07:00.523 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:07:00.783 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:07:00.783 EAL: Ignore mapping IO port bar(1) 00:07:01.044 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:07:01.044 EAL: Ignore mapping IO port bar(1) 00:07:01.305 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:07:01.305 EAL: Ignore mapping IO port bar(1) 00:07:01.305 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:07:01.566 EAL: Ignore mapping IO port bar(1) 00:07:01.566 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:07:01.827 EAL: Ignore mapping IO port bar(1) 00:07:01.827 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:07:02.088 EAL: Ignore mapping IO port bar(1) 00:07:02.088 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:07:02.088 EAL: Ignore mapping IO port bar(1) 00:07:02.350 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:07:02.350 EAL: Ignore mapping IO port bar(1) 00:07:02.610 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:07:02.610 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:07:02.610 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:07:02.610 Starting DPDK initialization... 00:07:02.610 Starting SPDK post initialization... 00:07:02.610 SPDK NVMe probe 00:07:02.610 Attaching to 0000:65:00.0 00:07:02.610 Attached to 0000:65:00.0 00:07:02.610 Cleaning up... 
00:07:04.523 00:07:04.524 real 0m5.746s 00:07:04.524 user 0m0.111s 00:07:04.524 sys 0m0.191s 00:07:04.524 12:51:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.524 12:51:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 END TEST env_dpdk_post_init 00:07:04.524 ************************************ 00:07:04.524 12:51:06 env -- env/env.sh@26 -- # uname 00:07:04.524 12:51:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:04.524 12:51:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.524 12:51:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.524 12:51:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.524 12:51:06 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 START TEST env_mem_callbacks 00:07:04.524 ************************************ 00:07:04.524 12:51:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.524 EAL: Detected CPU lcores: 128 00:07:04.524 EAL: Detected NUMA nodes: 2 00:07:04.524 EAL: Detected shared linkage of DPDK 00:07:04.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:04.524 EAL: Selected IOVA mode 'VA' 00:07:04.524 EAL: VFIO support initialized 00:07:04.524 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:04.524 00:07:04.524 00:07:04.524 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.524 http://cunit.sourceforge.net/ 00:07:04.524 00:07:04.524 00:07:04.524 Suite: memory 00:07:04.524 Test: test ... 
00:07:04.524 register 0x200000200000 2097152 00:07:04.524 malloc 3145728 00:07:04.524 register 0x200000400000 4194304 00:07:04.524 buf 0x200000500000 len 3145728 PASSED 00:07:04.524 malloc 64 00:07:04.524 buf 0x2000004fff40 len 64 PASSED 00:07:04.524 malloc 4194304 00:07:04.524 register 0x200000800000 6291456 00:07:04.524 buf 0x200000a00000 len 4194304 PASSED 00:07:04.524 free 0x200000500000 3145728 00:07:04.524 free 0x2000004fff40 64 00:07:04.524 unregister 0x200000400000 4194304 PASSED 00:07:04.524 free 0x200000a00000 4194304 00:07:04.524 unregister 0x200000800000 6291456 PASSED 00:07:04.524 malloc 8388608 00:07:04.524 register 0x200000400000 10485760 00:07:04.524 buf 0x200000600000 len 8388608 PASSED 00:07:04.524 free 0x200000600000 8388608 00:07:04.524 unregister 0x200000400000 10485760 PASSED 00:07:04.524 passed 00:07:04.524 00:07:04.524 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.524 suites 1 1 n/a 0 0 00:07:04.524 tests 1 1 1 0 0 00:07:04.524 asserts 15 15 15 0 n/a 00:07:04.524 00:07:04.524 Elapsed time = 0.010 seconds 00:07:04.524 00:07:04.524 real 0m0.069s 00:07:04.524 user 0m0.021s 00:07:04.524 sys 0m0.047s 00:07:04.524 12:51:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.524 12:51:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 END TEST env_mem_callbacks 00:07:04.524 ************************************ 00:07:04.524 00:07:04.524 real 0m7.554s 00:07:04.524 user 0m1.069s 00:07:04.524 sys 0m1.038s 00:07:04.524 12:51:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.524 12:51:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 END TEST env 00:07:04.524 ************************************ 00:07:04.524 12:51:07 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:04.524 12:51:07 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.524 12:51:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.524 12:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 START TEST rpc 00:07:04.524 ************************************ 00:07:04.524 12:51:07 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:04.785 * Looking for test storage... 00:07:04.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.785 12:51:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.785 12:51:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.785 12:51:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.785 12:51:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.785 12:51:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.785 12:51:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:04.785 12:51:07 rpc -- scripts/common.sh@345 -- # : 1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.785 12:51:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.785 12:51:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@353 -- # local d=1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.785 12:51:07 rpc -- scripts/common.sh@355 -- # echo 1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.785 12:51:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@353 -- # local d=2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.785 12:51:07 rpc -- scripts/common.sh@355 -- # echo 2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.785 12:51:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.785 12:51:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.785 12:51:07 rpc -- scripts/common.sh@368 -- # return 0 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.785 --rc genhtml_branch_coverage=1 00:07:04.785 --rc genhtml_function_coverage=1 00:07:04.785 --rc genhtml_legend=1 00:07:04.785 --rc geninfo_all_blocks=1 00:07:04.785 --rc geninfo_unexecuted_blocks=1 00:07:04.785 00:07:04.785 ' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.785 --rc genhtml_branch_coverage=1 00:07:04.785 --rc genhtml_function_coverage=1 00:07:04.785 --rc genhtml_legend=1 00:07:04.785 --rc geninfo_all_blocks=1 00:07:04.785 --rc geninfo_unexecuted_blocks=1 00:07:04.785 00:07:04.785 ' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:04.785 --rc genhtml_branch_coverage=1 00:07:04.785 --rc genhtml_function_coverage=1 00:07:04.785 --rc genhtml_legend=1 00:07:04.785 --rc geninfo_all_blocks=1 00:07:04.785 --rc geninfo_unexecuted_blocks=1 00:07:04.785 00:07:04.785 ' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.785 --rc genhtml_branch_coverage=1 00:07:04.785 --rc genhtml_function_coverage=1 00:07:04.785 --rc genhtml_legend=1 00:07:04.785 --rc geninfo_all_blocks=1 00:07:04.785 --rc geninfo_unexecuted_blocks=1 00:07:04.785 00:07:04.785 ' 00:07:04.785 12:51:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=682598 00:07:04.785 12:51:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.785 12:51:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 682598 00:07:04.785 12:51:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 682598 ']' 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.785 12:51:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.785 [2024-11-29 12:51:07.442493] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:04.785 [2024-11-29 12:51:07.442564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682598 ] 00:07:05.046 [2024-11-29 12:51:07.535066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.046 [2024-11-29 12:51:07.587047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:05.046 [2024-11-29 12:51:07.587100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 682598' to capture a snapshot of events at runtime. 00:07:05.046 [2024-11-29 12:51:07.587108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.046 [2024-11-29 12:51:07.587116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.046 [2024-11-29 12:51:07.587122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid682598 for offline analysis/debug. 
00:07:05.046 [2024-11-29 12:51:07.587877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.620 12:51:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.620 12:51:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.620 12:51:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:05.620 12:51:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:05.620 12:51:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:05.620 12:51:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:05.620 12:51:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.620 12:51:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.620 12:51:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.620 ************************************ 00:07:05.620 START TEST rpc_integrity 00:07:05.620 ************************************ 00:07:05.620 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:05.620 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:05.620 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.620 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.620 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.620 12:51:08 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:05.620 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:05.881 { 00:07:05.881 "name": "Malloc0", 00:07:05.881 "aliases": [ 00:07:05.881 "5c981fc0-f97d-4e07-9eb2-691e9ff5140a" 00:07:05.881 ], 00:07:05.881 "product_name": "Malloc disk", 00:07:05.881 "block_size": 512, 00:07:05.881 "num_blocks": 16384, 00:07:05.881 "uuid": "5c981fc0-f97d-4e07-9eb2-691e9ff5140a", 00:07:05.881 "assigned_rate_limits": { 00:07:05.881 "rw_ios_per_sec": 0, 00:07:05.881 "rw_mbytes_per_sec": 0, 00:07:05.881 "r_mbytes_per_sec": 0, 00:07:05.881 "w_mbytes_per_sec": 0 00:07:05.881 }, 00:07:05.881 "claimed": false, 00:07:05.881 "zoned": false, 00:07:05.881 "supported_io_types": { 00:07:05.881 "read": true, 00:07:05.881 "write": true, 00:07:05.881 "unmap": true, 00:07:05.881 "flush": true, 00:07:05.881 "reset": true, 00:07:05.881 "nvme_admin": false, 00:07:05.881 "nvme_io": false, 00:07:05.881 "nvme_io_md": false, 00:07:05.881 "write_zeroes": true, 00:07:05.881 "zcopy": true, 00:07:05.881 "get_zone_info": false, 00:07:05.881 
"zone_management": false, 00:07:05.881 "zone_append": false, 00:07:05.881 "compare": false, 00:07:05.881 "compare_and_write": false, 00:07:05.881 "abort": true, 00:07:05.881 "seek_hole": false, 00:07:05.881 "seek_data": false, 00:07:05.881 "copy": true, 00:07:05.881 "nvme_iov_md": false 00:07:05.881 }, 00:07:05.881 "memory_domains": [ 00:07:05.881 { 00:07:05.881 "dma_device_id": "system", 00:07:05.881 "dma_device_type": 1 00:07:05.881 }, 00:07:05.881 { 00:07:05.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.881 "dma_device_type": 2 00:07:05.881 } 00:07:05.881 ], 00:07:05.881 "driver_specific": {} 00:07:05.881 } 00:07:05.881 ]' 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 [2024-11-29 12:51:08.401583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:05.881 [2024-11-29 12:51:08.401629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.881 [2024-11-29 12:51:08.401645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24bd800 00:07:05.881 [2024-11-29 12:51:08.401653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.881 [2024-11-29 12:51:08.403236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.881 [2024-11-29 12:51:08.403274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:05.881 Passthru0 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:05.881 { 00:07:05.881 "name": "Malloc0", 00:07:05.881 "aliases": [ 00:07:05.881 "5c981fc0-f97d-4e07-9eb2-691e9ff5140a" 00:07:05.881 ], 00:07:05.881 "product_name": "Malloc disk", 00:07:05.881 "block_size": 512, 00:07:05.881 "num_blocks": 16384, 00:07:05.881 "uuid": "5c981fc0-f97d-4e07-9eb2-691e9ff5140a", 00:07:05.881 "assigned_rate_limits": { 00:07:05.881 "rw_ios_per_sec": 0, 00:07:05.881 "rw_mbytes_per_sec": 0, 00:07:05.881 "r_mbytes_per_sec": 0, 00:07:05.882 "w_mbytes_per_sec": 0 00:07:05.882 }, 00:07:05.882 "claimed": true, 00:07:05.882 "claim_type": "exclusive_write", 00:07:05.882 "zoned": false, 00:07:05.882 "supported_io_types": { 00:07:05.882 "read": true, 00:07:05.882 "write": true, 00:07:05.882 "unmap": true, 00:07:05.882 "flush": true, 00:07:05.882 "reset": true, 00:07:05.882 "nvme_admin": false, 00:07:05.882 "nvme_io": false, 00:07:05.882 "nvme_io_md": false, 00:07:05.882 "write_zeroes": true, 00:07:05.882 "zcopy": true, 00:07:05.882 "get_zone_info": false, 00:07:05.882 "zone_management": false, 00:07:05.882 "zone_append": false, 00:07:05.882 "compare": false, 00:07:05.882 "compare_and_write": false, 00:07:05.882 "abort": true, 00:07:05.882 "seek_hole": false, 00:07:05.882 "seek_data": false, 00:07:05.882 "copy": true, 00:07:05.882 "nvme_iov_md": false 00:07:05.882 }, 00:07:05.882 "memory_domains": [ 00:07:05.882 { 00:07:05.882 "dma_device_id": "system", 00:07:05.882 "dma_device_type": 1 00:07:05.882 }, 00:07:05.882 { 00:07:05.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.882 "dma_device_type": 2 00:07:05.882 } 00:07:05.882 ], 00:07:05.882 "driver_specific": {} 00:07:05.882 }, 00:07:05.882 { 
00:07:05.882 "name": "Passthru0", 00:07:05.882 "aliases": [ 00:07:05.882 "0b079dd5-5bb8-5deb-949f-5e5c7c4995ab" 00:07:05.882 ], 00:07:05.882 "product_name": "passthru", 00:07:05.882 "block_size": 512, 00:07:05.882 "num_blocks": 16384, 00:07:05.882 "uuid": "0b079dd5-5bb8-5deb-949f-5e5c7c4995ab", 00:07:05.882 "assigned_rate_limits": { 00:07:05.882 "rw_ios_per_sec": 0, 00:07:05.882 "rw_mbytes_per_sec": 0, 00:07:05.882 "r_mbytes_per_sec": 0, 00:07:05.882 "w_mbytes_per_sec": 0 00:07:05.882 }, 00:07:05.882 "claimed": false, 00:07:05.882 "zoned": false, 00:07:05.882 "supported_io_types": { 00:07:05.882 "read": true, 00:07:05.882 "write": true, 00:07:05.882 "unmap": true, 00:07:05.882 "flush": true, 00:07:05.882 "reset": true, 00:07:05.882 "nvme_admin": false, 00:07:05.882 "nvme_io": false, 00:07:05.882 "nvme_io_md": false, 00:07:05.882 "write_zeroes": true, 00:07:05.882 "zcopy": true, 00:07:05.882 "get_zone_info": false, 00:07:05.882 "zone_management": false, 00:07:05.882 "zone_append": false, 00:07:05.882 "compare": false, 00:07:05.882 "compare_and_write": false, 00:07:05.882 "abort": true, 00:07:05.882 "seek_hole": false, 00:07:05.882 "seek_data": false, 00:07:05.882 "copy": true, 00:07:05.882 "nvme_iov_md": false 00:07:05.882 }, 00:07:05.882 "memory_domains": [ 00:07:05.882 { 00:07:05.882 "dma_device_id": "system", 00:07:05.882 "dma_device_type": 1 00:07:05.882 }, 00:07:05.882 { 00:07:05.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.882 "dma_device_type": 2 00:07:05.882 } 00:07:05.882 ], 00:07:05.882 "driver_specific": { 00:07:05.882 "passthru": { 00:07:05.882 "name": "Passthru0", 00:07:05.882 "base_bdev_name": "Malloc0" 00:07:05.882 } 00:07:05.882 } 00:07:05.882 } 00:07:05.882 ]' 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:05.882 12:51:08 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.882 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:05.882 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:06.143 12:51:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:06.143 00:07:06.143 real 0m0.295s 00:07:06.143 user 0m0.180s 00:07:06.143 sys 0m0.047s 00:07:06.143 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 00:07:06.143 END TEST rpc_integrity 00:07:06.143 ************************************ 00:07:06.143 12:51:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:06.143 12:51:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.143 12:51:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.143 12:51:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 00:07:06.143 START TEST rpc_plugins 
00:07:06.143 ************************************ 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:06.143 { 00:07:06.143 "name": "Malloc1", 00:07:06.143 "aliases": [ 00:07:06.143 "8728b179-ccf0-4a74-a232-096a93a8610b" 00:07:06.143 ], 00:07:06.143 "product_name": "Malloc disk", 00:07:06.143 "block_size": 4096, 00:07:06.143 "num_blocks": 256, 00:07:06.143 "uuid": "8728b179-ccf0-4a74-a232-096a93a8610b", 00:07:06.143 "assigned_rate_limits": { 00:07:06.143 "rw_ios_per_sec": 0, 00:07:06.143 "rw_mbytes_per_sec": 0, 00:07:06.143 "r_mbytes_per_sec": 0, 00:07:06.143 "w_mbytes_per_sec": 0 00:07:06.143 }, 00:07:06.143 "claimed": false, 00:07:06.143 "zoned": false, 00:07:06.143 "supported_io_types": { 00:07:06.143 "read": true, 00:07:06.143 "write": true, 00:07:06.143 "unmap": true, 00:07:06.143 "flush": true, 00:07:06.143 "reset": true, 00:07:06.143 "nvme_admin": false, 00:07:06.143 "nvme_io": false, 00:07:06.143 "nvme_io_md": false, 00:07:06.143 "write_zeroes": true, 00:07:06.143 "zcopy": true, 00:07:06.143 "get_zone_info": false, 00:07:06.143 "zone_management": false, 00:07:06.143 
"zone_append": false, 00:07:06.143 "compare": false, 00:07:06.143 "compare_and_write": false, 00:07:06.143 "abort": true, 00:07:06.143 "seek_hole": false, 00:07:06.143 "seek_data": false, 00:07:06.143 "copy": true, 00:07:06.143 "nvme_iov_md": false 00:07:06.143 }, 00:07:06.143 "memory_domains": [ 00:07:06.143 { 00:07:06.143 "dma_device_id": "system", 00:07:06.143 "dma_device_type": 1 00:07:06.143 }, 00:07:06.143 { 00:07:06.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.143 "dma_device_type": 2 00:07:06.143 } 00:07:06.143 ], 00:07:06.143 "driver_specific": {} 00:07:06.143 } 00:07:06.143 ]' 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:06.143 12:51:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:06.143 00:07:06.143 real 0m0.153s 00:07:06.143 user 0m0.096s 00:07:06.143 sys 0m0.021s 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.143 12:51:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 
00:07:06.143 END TEST rpc_plugins 00:07:06.143 ************************************ 00:07:06.405 12:51:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:06.405 12:51:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.405 12:51:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.405 12:51:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.405 ************************************ 00:07:06.405 START TEST rpc_trace_cmd_test 00:07:06.405 ************************************ 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:06.405 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid682598", 00:07:06.405 "tpoint_group_mask": "0x8", 00:07:06.405 "iscsi_conn": { 00:07:06.405 "mask": "0x2", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "scsi": { 00:07:06.405 "mask": "0x4", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "bdev": { 00:07:06.405 "mask": "0x8", 00:07:06.405 "tpoint_mask": "0xffffffffffffffff" 00:07:06.405 }, 00:07:06.405 "nvmf_rdma": { 00:07:06.405 "mask": "0x10", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "nvmf_tcp": { 00:07:06.405 "mask": "0x20", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "ftl": { 00:07:06.405 "mask": "0x40", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "blobfs": { 00:07:06.405 "mask": "0x80", 00:07:06.405 
"tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "dsa": { 00:07:06.405 "mask": "0x200", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "thread": { 00:07:06.405 "mask": "0x400", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "nvme_pcie": { 00:07:06.405 "mask": "0x800", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "iaa": { 00:07:06.405 "mask": "0x1000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "nvme_tcp": { 00:07:06.405 "mask": "0x2000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "bdev_nvme": { 00:07:06.405 "mask": "0x4000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "sock": { 00:07:06.405 "mask": "0x8000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "blob": { 00:07:06.405 "mask": "0x10000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "bdev_raid": { 00:07:06.405 "mask": "0x20000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 }, 00:07:06.405 "scheduler": { 00:07:06.405 "mask": "0x40000", 00:07:06.405 "tpoint_mask": "0x0" 00:07:06.405 } 00:07:06.405 }' 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:06.405 12:51:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:06.405 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:06.405 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:06.405 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:06.666 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:06.666 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:06.666 12:51:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:06.666 00:07:06.666 real 0m0.254s 00:07:06.666 user 0m0.207s 00:07:06.666 sys 0m0.039s 00:07:06.666 12:51:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.666 12:51:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 ************************************ 00:07:06.666 END TEST rpc_trace_cmd_test 00:07:06.666 ************************************ 00:07:06.666 12:51:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:06.666 12:51:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:06.666 12:51:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:06.666 12:51:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.666 12:51:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.666 12:51:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 ************************************ 00:07:06.666 START TEST rpc_daemon_integrity 00:07:06.666 ************************************ 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:06.666 { 00:07:06.666 "name": "Malloc2", 00:07:06.666 "aliases": [ 00:07:06.666 "612cf8bb-3754-48b5-80fc-7a5523a2b318" 00:07:06.666 ], 00:07:06.666 "product_name": "Malloc disk", 00:07:06.666 "block_size": 512, 00:07:06.666 "num_blocks": 16384, 00:07:06.666 "uuid": "612cf8bb-3754-48b5-80fc-7a5523a2b318", 00:07:06.666 "assigned_rate_limits": { 00:07:06.666 "rw_ios_per_sec": 0, 00:07:06.666 "rw_mbytes_per_sec": 0, 00:07:06.666 "r_mbytes_per_sec": 0, 00:07:06.666 "w_mbytes_per_sec": 0 00:07:06.666 }, 00:07:06.666 "claimed": false, 00:07:06.666 "zoned": false, 00:07:06.666 "supported_io_types": { 00:07:06.666 "read": true, 00:07:06.666 "write": true, 00:07:06.666 "unmap": true, 00:07:06.666 "flush": true, 00:07:06.666 "reset": true, 00:07:06.666 "nvme_admin": false, 00:07:06.666 "nvme_io": false, 00:07:06.666 "nvme_io_md": false, 00:07:06.666 "write_zeroes": true, 00:07:06.666 "zcopy": true, 00:07:06.666 "get_zone_info": false, 00:07:06.666 "zone_management": false, 00:07:06.666 "zone_append": false, 00:07:06.666 "compare": false, 00:07:06.666 "compare_and_write": false, 00:07:06.666 "abort": true, 00:07:06.666 "seek_hole": false, 00:07:06.666 "seek_data": false, 00:07:06.666 "copy": true, 00:07:06.666 "nvme_iov_md": false 00:07:06.666 }, 00:07:06.666 "memory_domains": [ 00:07:06.666 { 
00:07:06.666 "dma_device_id": "system", 00:07:06.666 "dma_device_type": 1 00:07:06.666 }, 00:07:06.666 { 00:07:06.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.666 "dma_device_type": 2 00:07:06.666 } 00:07:06.666 ], 00:07:06.666 "driver_specific": {} 00:07:06.666 } 00:07:06.666 ]' 00:07:06.666 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 [2024-11-29 12:51:09.360243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:06.928 [2024-11-29 12:51:09.360287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.928 [2024-11-29 12:51:09.360303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2379fe0 00:07:06.928 [2024-11-29 12:51:09.360311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.928 [2024-11-29 12:51:09.361806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.928 [2024-11-29 12:51:09.361842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:06.928 Passthru0 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:06.928 { 00:07:06.928 "name": "Malloc2", 00:07:06.928 "aliases": [ 00:07:06.928 "612cf8bb-3754-48b5-80fc-7a5523a2b318" 00:07:06.928 ], 00:07:06.928 "product_name": "Malloc disk", 00:07:06.928 "block_size": 512, 00:07:06.928 "num_blocks": 16384, 00:07:06.928 "uuid": "612cf8bb-3754-48b5-80fc-7a5523a2b318", 00:07:06.928 "assigned_rate_limits": { 00:07:06.928 "rw_ios_per_sec": 0, 00:07:06.928 "rw_mbytes_per_sec": 0, 00:07:06.928 "r_mbytes_per_sec": 0, 00:07:06.928 "w_mbytes_per_sec": 0 00:07:06.928 }, 00:07:06.928 "claimed": true, 00:07:06.928 "claim_type": "exclusive_write", 00:07:06.928 "zoned": false, 00:07:06.928 "supported_io_types": { 00:07:06.928 "read": true, 00:07:06.928 "write": true, 00:07:06.928 "unmap": true, 00:07:06.928 "flush": true, 00:07:06.928 "reset": true, 00:07:06.928 "nvme_admin": false, 00:07:06.928 "nvme_io": false, 00:07:06.928 "nvme_io_md": false, 00:07:06.928 "write_zeroes": true, 00:07:06.928 "zcopy": true, 00:07:06.928 "get_zone_info": false, 00:07:06.928 "zone_management": false, 00:07:06.928 "zone_append": false, 00:07:06.928 "compare": false, 00:07:06.928 "compare_and_write": false, 00:07:06.928 "abort": true, 00:07:06.928 "seek_hole": false, 00:07:06.928 "seek_data": false, 00:07:06.928 "copy": true, 00:07:06.928 "nvme_iov_md": false 00:07:06.928 }, 00:07:06.928 "memory_domains": [ 00:07:06.928 { 00:07:06.928 "dma_device_id": "system", 00:07:06.928 "dma_device_type": 1 00:07:06.928 }, 00:07:06.928 { 00:07:06.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.928 "dma_device_type": 2 00:07:06.928 } 00:07:06.928 ], 00:07:06.928 "driver_specific": {} 00:07:06.928 }, 00:07:06.928 { 00:07:06.928 "name": "Passthru0", 00:07:06.928 "aliases": [ 00:07:06.928 "2159ddaa-26fe-515d-b911-0ccf9367975b" 00:07:06.928 ], 00:07:06.928 "product_name": "passthru", 00:07:06.928 "block_size": 512, 00:07:06.928 "num_blocks": 16384, 00:07:06.928 "uuid": 
"2159ddaa-26fe-515d-b911-0ccf9367975b", 00:07:06.928 "assigned_rate_limits": { 00:07:06.928 "rw_ios_per_sec": 0, 00:07:06.928 "rw_mbytes_per_sec": 0, 00:07:06.928 "r_mbytes_per_sec": 0, 00:07:06.928 "w_mbytes_per_sec": 0 00:07:06.928 }, 00:07:06.928 "claimed": false, 00:07:06.928 "zoned": false, 00:07:06.928 "supported_io_types": { 00:07:06.928 "read": true, 00:07:06.928 "write": true, 00:07:06.928 "unmap": true, 00:07:06.928 "flush": true, 00:07:06.928 "reset": true, 00:07:06.928 "nvme_admin": false, 00:07:06.928 "nvme_io": false, 00:07:06.928 "nvme_io_md": false, 00:07:06.928 "write_zeroes": true, 00:07:06.928 "zcopy": true, 00:07:06.928 "get_zone_info": false, 00:07:06.928 "zone_management": false, 00:07:06.928 "zone_append": false, 00:07:06.928 "compare": false, 00:07:06.928 "compare_and_write": false, 00:07:06.928 "abort": true, 00:07:06.928 "seek_hole": false, 00:07:06.928 "seek_data": false, 00:07:06.928 "copy": true, 00:07:06.928 "nvme_iov_md": false 00:07:06.928 }, 00:07:06.928 "memory_domains": [ 00:07:06.928 { 00:07:06.928 "dma_device_id": "system", 00:07:06.928 "dma_device_type": 1 00:07:06.928 }, 00:07:06.928 { 00:07:06.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.928 "dma_device_type": 2 00:07:06.928 } 00:07:06.928 ], 00:07:06.928 "driver_specific": { 00:07:06.928 "passthru": { 00:07:06.928 "name": "Passthru0", 00:07:06.928 "base_bdev_name": "Malloc2" 00:07:06.928 } 00:07:06.928 } 00:07:06.928 } 00:07:06.928 ]' 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.928 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:06.929 00:07:06.929 real 0m0.302s 00:07:06.929 user 0m0.185s 00:07:06.929 sys 0m0.053s 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.929 12:51:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.929 ************************************ 00:07:06.929 END TEST rpc_daemon_integrity 00:07:06.929 ************************************ 00:07:06.929 12:51:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:06.929 12:51:09 rpc -- rpc/rpc.sh@84 -- # killprocess 682598 00:07:06.929 12:51:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 682598 ']' 00:07:06.929 12:51:09 rpc -- common/autotest_common.sh@958 -- # kill -0 682598 00:07:06.929 12:51:09 rpc -- common/autotest_common.sh@959 -- # uname 00:07:06.929 12:51:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.929 12:51:09 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682598 00:07:07.190 12:51:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.190 12:51:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.190 12:51:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682598' 00:07:07.190 killing process with pid 682598 00:07:07.190 12:51:09 rpc -- common/autotest_common.sh@973 -- # kill 682598 00:07:07.190 12:51:09 rpc -- common/autotest_common.sh@978 -- # wait 682598 00:07:07.452 00:07:07.452 real 0m2.709s 00:07:07.452 user 0m3.414s 00:07:07.452 sys 0m0.871s 00:07:07.452 12:51:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.452 12:51:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.452 ************************************ 00:07:07.452 END TEST rpc 00:07:07.452 ************************************ 00:07:07.452 12:51:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:07.452 12:51:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.452 12:51:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.452 12:51:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.452 ************************************ 00:07:07.452 START TEST skip_rpc 00:07:07.452 ************************************ 00:07:07.452 12:51:09 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:07.452 * Looking for test storage... 
00:07:07.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:07.452 12:51:10 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.452 12:51:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.452 12:51:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.713 12:51:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.713 12:51:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:07.713 12:51:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.713 12:51:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.713 --rc genhtml_branch_coverage=1 00:07:07.713 --rc genhtml_function_coverage=1 00:07:07.713 --rc genhtml_legend=1 00:07:07.713 --rc geninfo_all_blocks=1 00:07:07.713 --rc geninfo_unexecuted_blocks=1 00:07:07.713 00:07:07.713 ' 00:07:07.713 12:51:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.713 --rc genhtml_branch_coverage=1 00:07:07.713 --rc genhtml_function_coverage=1 00:07:07.713 --rc genhtml_legend=1 00:07:07.713 --rc geninfo_all_blocks=1 00:07:07.713 --rc geninfo_unexecuted_blocks=1 00:07:07.713 00:07:07.713 ' 00:07:07.713 12:51:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:07.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.713 --rc genhtml_branch_coverage=1 00:07:07.713 --rc genhtml_function_coverage=1 00:07:07.713 --rc genhtml_legend=1 00:07:07.713 --rc geninfo_all_blocks=1 00:07:07.713 --rc geninfo_unexecuted_blocks=1 00:07:07.713 00:07:07.713 ' 00:07:07.714 12:51:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.714 --rc genhtml_branch_coverage=1 00:07:07.714 --rc genhtml_function_coverage=1 00:07:07.714 --rc genhtml_legend=1 00:07:07.714 --rc geninfo_all_blocks=1 00:07:07.714 --rc geninfo_unexecuted_blocks=1 00:07:07.714 00:07:07.714 ' 00:07:07.714 12:51:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:07.714 12:51:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:07.714 12:51:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:07.714 12:51:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.714 12:51:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.714 12:51:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.714 ************************************ 00:07:07.714 START TEST skip_rpc 00:07:07.714 ************************************ 00:07:07.714 12:51:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:07.714 12:51:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=683781 00:07:07.714 12:51:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.714 12:51:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:07.714 12:51:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:07.714 [2024-11-29 12:51:10.267003] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:07.714 [2024-11-29 12:51:10.267068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683781 ] 00:07:07.714 [2024-11-29 12:51:10.364668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.975 [2024-11-29 12:51:10.418519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.259 12:51:15 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 683781 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 683781 ']' 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 683781 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683781 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683781' 00:07:13.259 killing process with pid 683781 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 683781 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 683781 00:07:13.259 00:07:13.259 real 0m5.266s 00:07:13.259 user 0m5.004s 00:07:13.259 sys 0m0.307s 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.259 12:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.259 ************************************ 00:07:13.259 END TEST skip_rpc 00:07:13.259 ************************************ 00:07:13.259 12:51:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:13.259 12:51:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.259 12:51:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.259 12:51:15 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.259 ************************************ 00:07:13.259 START TEST skip_rpc_with_json 00:07:13.259 ************************************ 00:07:13.259 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:13.259 12:51:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:13.259 12:51:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=684948 00:07:13.259 12:51:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 684948 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 684948 ']' 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.260 12:51:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.260 [2024-11-29 12:51:15.610038] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:13.260 [2024-11-29 12:51:15.610091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684948 ] 00:07:13.260 [2024-11-29 12:51:15.694425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.260 [2024-11-29 12:51:15.725288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.830 [2024-11-29 12:51:16.411461] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:13.830 request: 00:07:13.830 { 00:07:13.830 "trtype": "tcp", 00:07:13.830 "method": "nvmf_get_transports", 00:07:13.830 "req_id": 1 00:07:13.830 } 00:07:13.830 Got JSON-RPC error response 00:07:13.830 response: 00:07:13.830 { 00:07:13.830 "code": -19, 00:07:13.830 "message": "No such device" 00:07:13.830 } 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.830 [2024-11-29 12:51:16.423554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.830 12:51:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.830 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.091 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.091 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:14.091 { 00:07:14.091 "subsystems": [ 00:07:14.091 { 00:07:14.091 "subsystem": "fsdev", 00:07:14.091 "config": [ 00:07:14.091 { 00:07:14.091 "method": "fsdev_set_opts", 00:07:14.091 "params": { 00:07:14.091 "fsdev_io_pool_size": 65535, 00:07:14.091 "fsdev_io_cache_size": 256 00:07:14.091 } 00:07:14.091 } 00:07:14.091 ] 00:07:14.091 }, 00:07:14.091 { 00:07:14.091 "subsystem": "vfio_user_target", 00:07:14.091 "config": null 00:07:14.091 }, 00:07:14.091 { 00:07:14.091 "subsystem": "keyring", 00:07:14.091 "config": [] 00:07:14.091 }, 00:07:14.091 { 00:07:14.091 "subsystem": "iobuf", 00:07:14.091 "config": [ 00:07:14.091 { 00:07:14.091 "method": "iobuf_set_options", 00:07:14.091 "params": { 00:07:14.091 "small_pool_count": 8192, 00:07:14.091 "large_pool_count": 1024, 00:07:14.091 "small_bufsize": 8192, 00:07:14.091 "large_bufsize": 135168, 00:07:14.091 "enable_numa": false 00:07:14.091 } 00:07:14.091 } 00:07:14.091 ] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "sock", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "method": "sock_set_default_impl", 00:07:14.092 "params": { 00:07:14.092 "impl_name": "posix" 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "sock_impl_set_options", 00:07:14.092 "params": { 00:07:14.092 "impl_name": "ssl", 00:07:14.092 "recv_buf_size": 4096, 00:07:14.092 "send_buf_size": 4096, 
00:07:14.092 "enable_recv_pipe": true, 00:07:14.092 "enable_quickack": false, 00:07:14.092 "enable_placement_id": 0, 00:07:14.092 "enable_zerocopy_send_server": true, 00:07:14.092 "enable_zerocopy_send_client": false, 00:07:14.092 "zerocopy_threshold": 0, 00:07:14.092 "tls_version": 0, 00:07:14.092 "enable_ktls": false 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "sock_impl_set_options", 00:07:14.092 "params": { 00:07:14.092 "impl_name": "posix", 00:07:14.092 "recv_buf_size": 2097152, 00:07:14.092 "send_buf_size": 2097152, 00:07:14.092 "enable_recv_pipe": true, 00:07:14.092 "enable_quickack": false, 00:07:14.092 "enable_placement_id": 0, 00:07:14.092 "enable_zerocopy_send_server": true, 00:07:14.092 "enable_zerocopy_send_client": false, 00:07:14.092 "zerocopy_threshold": 0, 00:07:14.092 "tls_version": 0, 00:07:14.092 "enable_ktls": false 00:07:14.092 } 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "vmd", 00:07:14.092 "config": [] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "accel", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "method": "accel_set_options", 00:07:14.092 "params": { 00:07:14.092 "small_cache_size": 128, 00:07:14.092 "large_cache_size": 16, 00:07:14.092 "task_count": 2048, 00:07:14.092 "sequence_count": 2048, 00:07:14.092 "buf_count": 2048 00:07:14.092 } 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "bdev", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "method": "bdev_set_options", 00:07:14.092 "params": { 00:07:14.092 "bdev_io_pool_size": 65535, 00:07:14.092 "bdev_io_cache_size": 256, 00:07:14.092 "bdev_auto_examine": true, 00:07:14.092 "iobuf_small_cache_size": 128, 00:07:14.092 "iobuf_large_cache_size": 16 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_raid_set_options", 00:07:14.092 "params": { 00:07:14.092 "process_window_size_kb": 1024, 00:07:14.092 "process_max_bandwidth_mb_sec": 0 
00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_iscsi_set_options", 00:07:14.092 "params": { 00:07:14.092 "timeout_sec": 30 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_nvme_set_options", 00:07:14.092 "params": { 00:07:14.092 "action_on_timeout": "none", 00:07:14.092 "timeout_us": 0, 00:07:14.092 "timeout_admin_us": 0, 00:07:14.092 "keep_alive_timeout_ms": 10000, 00:07:14.092 "arbitration_burst": 0, 00:07:14.092 "low_priority_weight": 0, 00:07:14.092 "medium_priority_weight": 0, 00:07:14.092 "high_priority_weight": 0, 00:07:14.092 "nvme_adminq_poll_period_us": 10000, 00:07:14.092 "nvme_ioq_poll_period_us": 0, 00:07:14.092 "io_queue_requests": 0, 00:07:14.092 "delay_cmd_submit": true, 00:07:14.092 "transport_retry_count": 4, 00:07:14.092 "bdev_retry_count": 3, 00:07:14.092 "transport_ack_timeout": 0, 00:07:14.092 "ctrlr_loss_timeout_sec": 0, 00:07:14.092 "reconnect_delay_sec": 0, 00:07:14.092 "fast_io_fail_timeout_sec": 0, 00:07:14.092 "disable_auto_failback": false, 00:07:14.092 "generate_uuids": false, 00:07:14.092 "transport_tos": 0, 00:07:14.092 "nvme_error_stat": false, 00:07:14.092 "rdma_srq_size": 0, 00:07:14.092 "io_path_stat": false, 00:07:14.092 "allow_accel_sequence": false, 00:07:14.092 "rdma_max_cq_size": 0, 00:07:14.092 "rdma_cm_event_timeout_ms": 0, 00:07:14.092 "dhchap_digests": [ 00:07:14.092 "sha256", 00:07:14.092 "sha384", 00:07:14.092 "sha512" 00:07:14.092 ], 00:07:14.092 "dhchap_dhgroups": [ 00:07:14.092 "null", 00:07:14.092 "ffdhe2048", 00:07:14.092 "ffdhe3072", 00:07:14.092 "ffdhe4096", 00:07:14.092 "ffdhe6144", 00:07:14.092 "ffdhe8192" 00:07:14.092 ] 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_nvme_set_hotplug", 00:07:14.092 "params": { 00:07:14.092 "period_us": 100000, 00:07:14.092 "enable": false 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_wait_for_examine" 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 }, 00:07:14.092 { 
00:07:14.092 "subsystem": "scsi", 00:07:14.092 "config": null 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "scheduler", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "method": "framework_set_scheduler", 00:07:14.092 "params": { 00:07:14.092 "name": "static" 00:07:14.092 } 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "vhost_scsi", 00:07:14.092 "config": [] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "vhost_blk", 00:07:14.092 "config": [] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "ublk", 00:07:14.092 "config": [] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "nbd", 00:07:14.092 "config": [] 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "subsystem": "nvmf", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "method": "nvmf_set_config", 00:07:14.092 "params": { 00:07:14.092 "discovery_filter": "match_any", 00:07:14.092 "admin_cmd_passthru": { 00:07:14.092 "identify_ctrlr": false 00:07:14.092 }, 00:07:14.092 "dhchap_digests": [ 00:07:14.092 "sha256", 00:07:14.092 "sha384", 00:07:14.092 "sha512" 00:07:14.092 ], 00:07:14.092 "dhchap_dhgroups": [ 00:07:14.092 "null", 00:07:14.092 "ffdhe2048", 00:07:14.092 "ffdhe3072", 00:07:14.092 "ffdhe4096", 00:07:14.092 "ffdhe6144", 00:07:14.092 "ffdhe8192" 00:07:14.092 ] 00:07:14.092 } 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "nvmf_set_max_subsystems", 00:07:14.092 "params": { 00:07:14.092 "max_subsystems": 1024 00:07:14.092 } 00:07:14.092 }, 00:07:14.093 { 00:07:14.093 "method": "nvmf_set_crdt", 00:07:14.093 "params": { 00:07:14.093 "crdt1": 0, 00:07:14.093 "crdt2": 0, 00:07:14.093 "crdt3": 0 00:07:14.093 } 00:07:14.093 }, 00:07:14.093 { 00:07:14.093 "method": "nvmf_create_transport", 00:07:14.093 "params": { 00:07:14.093 "trtype": "TCP", 00:07:14.093 "max_queue_depth": 128, 00:07:14.093 "max_io_qpairs_per_ctrlr": 127, 00:07:14.093 "in_capsule_data_size": 4096, 00:07:14.093 "max_io_size": 131072, 00:07:14.093 
"io_unit_size": 131072, 00:07:14.093 "max_aq_depth": 128, 00:07:14.093 "num_shared_buffers": 511, 00:07:14.093 "buf_cache_size": 4294967295, 00:07:14.093 "dif_insert_or_strip": false, 00:07:14.093 "zcopy": false, 00:07:14.093 "c2h_success": true, 00:07:14.093 "sock_priority": 0, 00:07:14.093 "abort_timeout_sec": 1, 00:07:14.093 "ack_timeout": 0, 00:07:14.093 "data_wr_pool_size": 0 00:07:14.093 } 00:07:14.093 } 00:07:14.093 ] 00:07:14.093 }, 00:07:14.093 { 00:07:14.093 "subsystem": "iscsi", 00:07:14.093 "config": [ 00:07:14.093 { 00:07:14.093 "method": "iscsi_set_options", 00:07:14.093 "params": { 00:07:14.093 "node_base": "iqn.2016-06.io.spdk", 00:07:14.093 "max_sessions": 128, 00:07:14.093 "max_connections_per_session": 2, 00:07:14.093 "max_queue_depth": 64, 00:07:14.093 "default_time2wait": 2, 00:07:14.093 "default_time2retain": 20, 00:07:14.093 "first_burst_length": 8192, 00:07:14.093 "immediate_data": true, 00:07:14.093 "allow_duplicated_isid": false, 00:07:14.093 "error_recovery_level": 0, 00:07:14.093 "nop_timeout": 60, 00:07:14.093 "nop_in_interval": 30, 00:07:14.093 "disable_chap": false, 00:07:14.093 "require_chap": false, 00:07:14.093 "mutual_chap": false, 00:07:14.093 "chap_group": 0, 00:07:14.093 "max_large_datain_per_connection": 64, 00:07:14.093 "max_r2t_per_connection": 4, 00:07:14.093 "pdu_pool_size": 36864, 00:07:14.093 "immediate_data_pool_size": 16384, 00:07:14.093 "data_out_pool_size": 2048 00:07:14.093 } 00:07:14.093 } 00:07:14.093 ] 00:07:14.093 } 00:07:14.093 ] 00:07:14.093 } 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 684948 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 684948 ']' 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 684948 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684948 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684948' 00:07:14.093 killing process with pid 684948 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 684948 00:07:14.093 12:51:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 684948 00:07:14.360 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=685288 00:07:14.360 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:14.360 12:51:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 685288 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 685288 ']' 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 685288 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 685288 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 685288' 00:07:19.649 killing process with pid 685288 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 685288 00:07:19.649 12:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 685288 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:19.649 00:07:19.649 real 0m6.565s 00:07:19.649 user 0m6.481s 00:07:19.649 sys 0m0.563s 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:19.649 ************************************ 00:07:19.649 END TEST skip_rpc_with_json 00:07:19.649 ************************************ 00:07:19.649 12:51:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:19.649 12:51:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.649 12:51:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.649 12:51:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.649 ************************************ 00:07:19.649 START TEST skip_rpc_with_delay 00:07:19.649 ************************************ 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:19.649 [2024-11-29 12:51:22.252022] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.649 00:07:19.649 real 0m0.075s 00:07:19.649 user 0m0.048s 00:07:19.649 sys 0m0.027s 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.649 12:51:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:19.649 ************************************ 00:07:19.649 END TEST skip_rpc_with_delay 00:07:19.649 ************************************ 00:07:19.649 12:51:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:19.649 12:51:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:19.649 12:51:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:19.650 12:51:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.650 12:51:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.650 12:51:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.910 ************************************ 00:07:19.910 START TEST exit_on_failed_rpc_init 00:07:19.910 ************************************ 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=686359 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 686359 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
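The `NOT`/`es` bookkeeping visible throughout the trace above (`local es=0` ... `(( !es == 0 ))`) implements an expected-failure assertion: run a command and succeed only if it failed, which is how the delay test confirms that `--wait-for-rpc` without an RPC server is rejected. A reduced, hedged sketch of that idea (the helper name `NOT` matches the suite, but the internals here are simplified for illustration and omit the suite's extra exit-status handling):

```shell
# Simplified expected-failure helper in the spirit of autotest_common.sh's NOT:
# run the given command and invert its status, so "NOT cmd" succeeds
# exactly when cmd exits non-zero.
NOT() {
    "$@"
    es=$?
    # return success (0) only when the wrapped command failed
    [ "$es" -ne 0 ]
}

NOT false && echo "false failed, as expected"
NOT true  || echo "true succeeded, so NOT reports failure"
```

Used this way, a test line like `NOT spdk_tgt --no-rpc-server --wait-for-rpc` passes precisely when the target refuses the invalid flag combination.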
00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 686359 ']' 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.910 12:51:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:19.910 [2024-11-29 12:51:22.418314] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:19.910 [2024-11-29 12:51:22.418382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686359 ] 00:07:19.910 [2024-11-29 12:51:22.505970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.910 [2024-11-29 12:51:22.538381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.851 
12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.851 [2024-11-29 12:51:23.279794] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:20.851 [2024-11-29 12:51:23.279845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686589 ] 00:07:20.851 [2024-11-29 12:51:23.366850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.851 [2024-11-29 12:51:23.402976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.851 [2024-11-29 12:51:23.403030] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:20.851 [2024-11-29 12:51:23.403040] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:20.851 [2024-11-29 12:51:23.403047] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 686359 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 686359 ']' 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 686359 00:07:20.851 12:51:23 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 686359 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 686359' 00:07:20.851 killing process with pid 686359 00:07:20.851 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 686359 00:07:20.852 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 686359 00:07:21.112 00:07:21.112 real 0m1.343s 00:07:21.112 user 0m1.585s 00:07:21.112 sys 0m0.388s 00:07:21.112 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.112 12:51:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.112 ************************************ 00:07:21.112 END TEST exit_on_failed_rpc_init 00:07:21.112 ************************************ 00:07:21.112 12:51:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:21.112 00:07:21.112 real 0m13.779s 00:07:21.112 user 0m13.356s 00:07:21.112 sys 0m1.604s 00:07:21.112 12:51:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.112 12:51:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.112 ************************************ 00:07:21.112 END TEST skip_rpc 00:07:21.112 ************************************ 00:07:21.112 12:51:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.112 12:51:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.112 12:51:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.112 12:51:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 ************************************ 00:07:21.373 START TEST rpc_client 00:07:21.373 ************************************ 00:07:21.373 12:51:23 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.373 * Looking for test storage... 00:07:21.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:21.373 12:51:23 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.373 12:51:23 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.373 12:51:23 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.373 12:51:23 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.373 12:51:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.373 12:51:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.373 --rc genhtml_branch_coverage=1 00:07:21.373 --rc genhtml_function_coverage=1 00:07:21.373 --rc genhtml_legend=1 00:07:21.373 --rc geninfo_all_blocks=1 00:07:21.373 --rc geninfo_unexecuted_blocks=1 00:07:21.373 00:07:21.373 ' 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.373 --rc genhtml_branch_coverage=1 
00:07:21.373 --rc genhtml_function_coverage=1 00:07:21.373 --rc genhtml_legend=1 00:07:21.373 --rc geninfo_all_blocks=1 00:07:21.373 --rc geninfo_unexecuted_blocks=1 00:07:21.373 00:07:21.373 ' 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.373 --rc genhtml_branch_coverage=1 00:07:21.373 --rc genhtml_function_coverage=1 00:07:21.373 --rc genhtml_legend=1 00:07:21.373 --rc geninfo_all_blocks=1 00:07:21.373 --rc geninfo_unexecuted_blocks=1 00:07:21.373 00:07:21.373 ' 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.373 --rc genhtml_branch_coverage=1 00:07:21.373 --rc genhtml_function_coverage=1 00:07:21.373 --rc genhtml_legend=1 00:07:21.373 --rc geninfo_all_blocks=1 00:07:21.373 --rc geninfo_unexecuted_blocks=1 00:07:21.373 00:07:21.373 ' 00:07:21.373 12:51:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:21.373 OK 00:07:21.373 12:51:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:21.373 00:07:21.373 real 0m0.221s 00:07:21.373 user 0m0.131s 00:07:21.373 sys 0m0.103s 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.373 12:51:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 ************************************ 00:07:21.373 END TEST rpc_client 00:07:21.373 ************************************ 00:07:21.634 12:51:24 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.634 12:51:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.634 12:51:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.634 12:51:24 -- common/autotest_common.sh@10 
-- # set +x 00:07:21.634 ************************************ 00:07:21.634 START TEST json_config 00:07:21.634 ************************************ 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.634 12:51:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.634 12:51:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.634 12:51:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.634 12:51:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.634 12:51:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.634 12:51:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:21.634 12:51:24 json_config -- scripts/common.sh@345 -- # : 1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.634 12:51:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.634 12:51:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@353 -- # local d=1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.634 12:51:24 json_config -- scripts/common.sh@355 -- # echo 1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.634 12:51:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@353 -- # local d=2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.634 12:51:24 json_config -- scripts/common.sh@355 -- # echo 2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.634 12:51:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.634 12:51:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.634 12:51:24 json_config -- scripts/common.sh@368 -- # return 0 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.634 12:51:24 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.634 --rc genhtml_branch_coverage=1 00:07:21.634 --rc genhtml_function_coverage=1 00:07:21.634 --rc genhtml_legend=1 00:07:21.634 --rc geninfo_all_blocks=1 00:07:21.634 --rc geninfo_unexecuted_blocks=1 00:07:21.635 00:07:21.635 ' 00:07:21.635 12:51:24 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.635 --rc genhtml_branch_coverage=1 00:07:21.635 --rc genhtml_function_coverage=1 00:07:21.635 --rc genhtml_legend=1 00:07:21.635 --rc geninfo_all_blocks=1 00:07:21.635 --rc geninfo_unexecuted_blocks=1 00:07:21.635 00:07:21.635 ' 00:07:21.635 12:51:24 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.635 --rc genhtml_branch_coverage=1 00:07:21.635 --rc genhtml_function_coverage=1 00:07:21.635 --rc genhtml_legend=1 00:07:21.635 --rc geninfo_all_blocks=1 00:07:21.635 --rc geninfo_unexecuted_blocks=1 00:07:21.635 00:07:21.635 ' 00:07:21.635 12:51:24 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.635 --rc genhtml_branch_coverage=1 00:07:21.635 --rc genhtml_function_coverage=1 00:07:21.635 --rc genhtml_legend=1 00:07:21.635 --rc geninfo_all_blocks=1 00:07:21.635 --rc geninfo_unexecuted_blocks=1 00:07:21.635 00:07:21.635 ' 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.635 12:51:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.635 12:51:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.635 12:51:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.635 12:51:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.635 12:51:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.635 12:51:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.635 12:51:24 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.635 12:51:24 json_config -- paths/export.sh@5 -- # export PATH 00:07:21.635 12:51:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@51 -- # : 0 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.635 12:51:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:21.635 12:51:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:21.898 INFO: JSON configuration test init 00:07:21.898 12:51:24 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.898 12:51:24 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:21.898 12:51:24 json_config -- json_config/common.sh@9 -- # local app=target 00:07:21.898 12:51:24 json_config -- json_config/common.sh@10 -- # shift 00:07:21.898 12:51:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.898 12:51:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.898 12:51:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.898 12:51:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.898 12:51:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.898 12:51:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=686827 00:07:21.898 12:51:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.898 Waiting for target to run... 
00:07:21.898 12:51:24 json_config -- json_config/common.sh@25 -- # waitforlisten 686827 /var/tmp/spdk_tgt.sock 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 686827 ']' 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.898 12:51:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.898 12:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.898 [2024-11-29 12:51:24.389754] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
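The `waitforlisten` call above blocks until `spdk_tgt` creates its RPC socket at `/var/tmp/spdk_tgt.sock`. A minimal sketch of that polling pattern (`wait_for_socket` is a hypothetical name, not the real `autotest_common.sh` helper, which additionally checks the pid and performs an rpc.py handshake):

```shell
# Hypothetical helper mirroring the waitforlisten pattern traced above:
# poll until a UNIX-domain socket appears at the given path, retrying up
# to max_retries times before giving up.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # socket never appeared
}
```

Usage would look like `wait_for_socket /var/tmp/spdk_tgt.sock 100 || echo 'target did not start'`.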
00:07:21.898 [2024-11-29 12:51:24.389831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686827 ] 00:07:22.159 [2024-11-29 12:51:24.722351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.159 [2024-11-29 12:51:24.747262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:22.729 12:51:25 json_config -- json_config/common.sh@26 -- # echo '' 00:07:22.729 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.729 12:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:22.729 12:51:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:22.729 12:51:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:23.299 12:51:25 json_config -- json_config/json_config.sh@283 -- # 
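Earlier in this trace, `scripts/common.sh`'s `cmp_versions` decides whether `lcov` 1.15 is older than 2 by splitting both versions on `.-:` and comparing field by field, padding the shorter list with zeros. A hedged re-sketch of that comparison (`version_lt` is an illustrative name; this assumes purely numeric fields, while the real script also handles `>` and `=` operators):

```shell
# Sketch of the cmp_versions field-by-field "less than" check traced
# earlier: split on '.', '-' or ':', compare numerically, pad with 0.
version_lt() {
    local IFS='.-:'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not less-than
}
```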
tgt_check_notification_types 00:07:23.299 12:51:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:23.299 12:51:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.300 12:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:23.300 12:51:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:23.300 12:51:25 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@54 -- # sort 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:23.560 12:51:25 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:23.560 12:51:25 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:23.560 12:51:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.560 12:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:23.560 12:51:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.560 12:51:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:23.560 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:23.560 MallocForNvmf0 00:07:23.560 12:51:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
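The notification-type check above concatenates the expected and reported type lists and relies on `sort | uniq -u` printing only unpaired entries, so empty output means the two sets match exactly. The same trick in isolation (`type_diff` is an illustrative name for what `json_config.sh` does inline):

```shell
# Symmetric set difference via uniq -u: an entry present in both lists
# appears twice after sorting and is suppressed; entries unique to one
# list survive. Empty output therefore means the sets are equal.
type_diff() {
    echo "$1 $2" | tr ' ' '\n' | sort | uniq -u
}
```

In the trace, `enabled_types` and `get_types` both contain the four `*_register`/`*_unregister` entries, so `type_diff` comes back empty and the test returns 0.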
00:07:23.560 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:23.821 MallocForNvmf1 00:07:23.821 12:51:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:23.821 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:24.081 [2024-11-29 12:51:26.509475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.081 12:51:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.081 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.081 12:51:26 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:24.081 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:24.341 12:51:26 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:24.342 12:51:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:24.602 12:51:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:24.602 12:51:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:24.602 [2024-11-29 12:51:27.179538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:24.602 12:51:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:24.602 12:51:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.602 12:51:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.602 12:51:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:24.602 12:51:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.602 12:51:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.602 12:51:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:24.602 12:51:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:24.602 12:51:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:24.862 MallocBdevForConfigChangeCheck 00:07:24.862 12:51:27 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:24.862 12:51:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.862 12:51:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.862 12:51:27 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:24.862 12:51:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:25.122 12:51:27 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:07:25.122 INFO: shutting down applications... 00:07:25.122 12:51:27 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:25.122 12:51:27 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:25.122 12:51:27 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:25.122 12:51:27 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:25.694 Calling clear_iscsi_subsystem 00:07:25.694 Calling clear_nvmf_subsystem 00:07:25.694 Calling clear_nbd_subsystem 00:07:25.694 Calling clear_ublk_subsystem 00:07:25.694 Calling clear_vhost_blk_subsystem 00:07:25.694 Calling clear_vhost_scsi_subsystem 00:07:25.694 Calling clear_bdev_subsystem 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:25.694 12:51:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:25.954 12:51:28 json_config -- json_config/json_config.sh@352 -- # break 00:07:25.954 12:51:28 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:25.954 12:51:28 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:07:25.954 12:51:28 json_config -- json_config/common.sh@31 -- # local app=target 00:07:25.954 12:51:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:25.954 12:51:28 json_config -- json_config/common.sh@35 -- # [[ -n 686827 ]] 00:07:25.954 12:51:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 686827 00:07:25.954 12:51:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:25.954 12:51:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:25.954 12:51:28 json_config -- json_config/common.sh@41 -- # kill -0 686827 00:07:25.954 12:51:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:26.525 12:51:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:26.525 12:51:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.525 12:51:29 json_config -- json_config/common.sh@41 -- # kill -0 686827 00:07:26.525 12:51:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:26.525 12:51:29 json_config -- json_config/common.sh@43 -- # break 00:07:26.525 12:51:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:26.525 12:51:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:26.525 SPDK target shutdown done 00:07:26.525 12:51:29 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:26.525 INFO: relaunching applications... 
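The shutdown sequence traced here sends SIGINT to the target pid, then probes it with `kill -0` (signal 0 is an existence check, not a real signal) for up to 30 half-second iterations before declaring it dead. A condensed sketch of that loop (`shutdown_app` is a stand-in name; the real `json_config/common.sh` version also clears `app_pid` and reports the outcome):

```shell
# Graceful-shutdown loop: deliver a signal, then poll for process exit.
shutdown_app() {
    local pid=$1 sig=${2:-INT} i=0
    kill -"$sig" "$pid" 2>/dev/null
    while (( i++ < 30 )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process is gone
        sleep 0.5
    done
    return 1   # still alive; caller may escalate to SIGKILL
}
```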
00:07:26.525 12:51:29 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.525 12:51:29 json_config -- json_config/common.sh@9 -- # local app=target 00:07:26.525 12:51:29 json_config -- json_config/common.sh@10 -- # shift 00:07:26.525 12:51:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.525 12:51:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.525 12:51:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.525 12:51:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.525 12:51:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.525 12:51:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=687965 00:07:26.525 12:51:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.525 Waiting for target to run... 00:07:26.525 12:51:29 json_config -- json_config/common.sh@25 -- # waitforlisten 687965 /var/tmp/spdk_tgt.sock 00:07:26.525 12:51:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 687965 ']' 00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.525 12:51:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.525 [2024-11-29 12:51:29.203480] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:26.525 [2024-11-29 12:51:29.203563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687965 ] 00:07:27.098 [2024-11-29 12:51:29.532837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.098 [2024-11-29 12:51:29.564823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.669 [2024-11-29 12:51:30.069645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.669 [2024-11-29 12:51:30.102069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:27.669 12:51:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.669 12:51:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:27.669 12:51:30 json_config -- json_config/common.sh@26 -- # echo '' 00:07:27.669 00:07:27.669 12:51:30 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:27.669 12:51:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:27.669 INFO: Checking if target configuration is the same... 
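The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." step polls until the target's RPC socket is usable before any RPCs are issued. SPDK's actual `waitforlisten` helper does more (it retries an RPC against the socket); the sketch below only waits for the socket path to appear and is a simplified assumption, not the real implementation:

```shell
#!/usr/bin/env bash
# Sketch: poll for a UNIX domain socket with a retry budget, as a stand-in
# for the waitforlisten idea in the log (max_retries=100 there as well).
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Demo: a throwaway Python listener creates the socket shortly after launch
sock="/tmp/wait_demo_$$.sock"
python3 -c 'import socket,sys,time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1]); s.listen(1); time.sleep(1)' "$sock" &
wait_for_socket "$sock" && echo "socket ready"
wait
rm -f "$sock"
```

Polling with a retry budget keeps startup races (launch vs. first RPC) from turning into flaky test failures.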
00:07:27.669 12:51:30 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:27.669 12:51:30 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:27.669 12:51:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:27.669 + '[' 2 -ne 2 ']' 00:07:27.669 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:27.669 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:27.669 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.669 +++ basename /dev/fd/62 00:07:27.669 ++ mktemp /tmp/62.XXX 00:07:27.669 + tmp_file_1=/tmp/62.tgg 00:07:27.669 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:27.669 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:27.669 + tmp_file_2=/tmp/spdk_tgt_config.json.iGZ 00:07:27.669 + ret=0 00:07:27.669 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:27.929 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:27.929 + diff -u /tmp/62.tgg /tmp/spdk_tgt_config.json.iGZ 00:07:27.929 + echo 'INFO: JSON config files are the same' 00:07:27.929 INFO: JSON config files are the same 00:07:27.929 + rm /tmp/62.tgg /tmp/spdk_tgt_config.json.iGZ 00:07:27.929 + exit 0 00:07:27.929 12:51:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:27.929 12:51:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:27.929 INFO: changing configuration and checking if this can be detected... 
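The "JSON config files are the same" check above dumps the live config with `save_config`, sorts both JSON documents with SPDK's config_filter.py, and byte-compares them with `diff -u`. The normalize-then-diff idea can be sketched with `python3 -m json.tool --sort-keys` as a generic stand-in for the SPDK-specific filter:

```shell
#!/usr/bin/env bash
# Sketch of the compare step: canonicalize both JSON files (sorted keys,
# uniform whitespace), then diff. Returns diff's exit status: 0 if equal.
json_same() {
    local a b
    a=$(mktemp) && b=$(mktemp)
    python3 -m json.tool --sort-keys "$1" > "$a"
    python3 -m json.tool --sort-keys "$2" > "$b"
    diff -u "$a" "$b"
    local ret=$?
    rm -f "$a" "$b"
    return $ret
}

# Demo: same keys in a different order compare equal after normalization
echo '{"b": 2, "a": 1}' > "/tmp/cfg1_$$.json"
echo '{"a": 1, "b": 2}' > "/tmp/cfg2_$$.json"
json_same "/tmp/cfg1_$$.json" "/tmp/cfg2_$$.json" \
    && echo "INFO: JSON config files are the same"
rm -f "/tmp/cfg1_$$.json" "/tmp/cfg2_$$.json"
```

Sorting first is what makes the comparison immune to key ordering and formatting differences between the saved and reference configs.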
00:07:27.929 12:51:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:27.929 12:51:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:28.190 12:51:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:28.190 12:51:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:28.190 12:51:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:28.190 + '[' 2 -ne 2 ']' 00:07:28.190 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:28.190 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:28.190 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:28.190 +++ basename /dev/fd/62 00:07:28.190 ++ mktemp /tmp/62.XXX 00:07:28.190 + tmp_file_1=/tmp/62.IYw 00:07:28.190 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:28.190 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:28.190 + tmp_file_2=/tmp/spdk_tgt_config.json.0YL 00:07:28.190 + ret=0 00:07:28.190 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:28.451 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:28.451 + diff -u /tmp/62.IYw /tmp/spdk_tgt_config.json.0YL 00:07:28.451 + ret=1 00:07:28.451 + echo '=== Start of file: /tmp/62.IYw ===' 00:07:28.451 + cat /tmp/62.IYw 00:07:28.451 + echo '=== End of file: /tmp/62.IYw ===' 00:07:28.451 + echo '' 00:07:28.451 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0YL ===' 00:07:28.451 + cat /tmp/spdk_tgt_config.json.0YL 00:07:28.451 + echo '=== End of file: /tmp/spdk_tgt_config.json.0YL ===' 00:07:28.451 + echo '' 00:07:28.451 + rm /tmp/62.IYw /tmp/spdk_tgt_config.json.0YL 00:07:28.451 + exit 1 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:28.451 INFO: configuration change detected. 
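The change-detection pass above inverts the previous check: after deleting MallocBdevForConfigChangeCheck over RPC and re-saving, the dump is required to differ from the reference (`ret=1` is the expected outcome). A stand-in that mutates a local JSON file instead of driving a live SPDK target, with made-up demo contents:

```shell
#!/usr/bin/env bash
# Sketch of the change-detection step: the comparison must FAIL after a
# mutation; a clean diff here would mean config changes go undetected.
expect_change() {
    if diff -q "$1" "$2" > /dev/null; then
        echo "ERROR: change not detected" >&2
        return 1
    fi
    echo "INFO: configuration change detected."
}

ref=$(mktemp) cur=$(mktemp)
echo '{"bdevs": ["Malloc0", "MallocBdevForConfigChangeCheck"]}' > "$ref"
# Simulate the bdev_malloc_delete RPC by dropping one entry from the config
python3 -c 'import json, sys
cfg = json.load(open(sys.argv[1]))
cfg["bdevs"].remove("MallocBdevForConfigChangeCheck")
json.dump(cfg, open(sys.argv[2], "w"))' "$ref" "$cur"
expect_change "$ref" "$cur"
rm -f "$ref" "$cur"
```

Testing both directions (identical configs must match, mutated configs must not) is what gives the `save_config` round-trip real coverage.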
00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@324 -- # [[ -n 687965 ]] 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:28.451 12:51:31 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.451 12:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.712 12:51:31 json_config -- json_config/json_config.sh@330 -- # killprocess 687965 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@954 -- # '[' -z 687965 ']' 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@958 -- # kill -0 687965 
00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@959 -- # uname 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687965 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687965' 00:07:28.712 killing process with pid 687965 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@973 -- # kill 687965 00:07:28.712 12:51:31 json_config -- common/autotest_common.sh@978 -- # wait 687965 00:07:28.974 12:51:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:28.974 12:51:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:28.974 12:51:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.974 12:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 12:51:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:28.974 12:51:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:28.974 INFO: Success 00:07:28.974 00:07:28.974 real 0m7.415s 00:07:28.974 user 0m8.899s 00:07:28.974 sys 0m2.012s 00:07:28.974 12:51:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.974 12:51:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 END TEST json_config 00:07:28.974 ************************************ 00:07:28.974 12:51:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:28.974 12:51:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.974 12:51:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.974 12:51:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.974 ************************************ 00:07:28.974 START TEST json_config_extra_key 00:07:28.974 ************************************ 00:07:28.974 12:51:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.237 --rc genhtml_branch_coverage=1 00:07:29.237 --rc genhtml_function_coverage=1 00:07:29.237 --rc genhtml_legend=1 00:07:29.237 --rc geninfo_all_blocks=1 
00:07:29.237 --rc geninfo_unexecuted_blocks=1 00:07:29.237 00:07:29.237 ' 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.237 --rc genhtml_branch_coverage=1 00:07:29.237 --rc genhtml_function_coverage=1 00:07:29.237 --rc genhtml_legend=1 00:07:29.237 --rc geninfo_all_blocks=1 00:07:29.237 --rc geninfo_unexecuted_blocks=1 00:07:29.237 00:07:29.237 ' 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.237 --rc genhtml_branch_coverage=1 00:07:29.237 --rc genhtml_function_coverage=1 00:07:29.237 --rc genhtml_legend=1 00:07:29.237 --rc geninfo_all_blocks=1 00:07:29.237 --rc geninfo_unexecuted_blocks=1 00:07:29.237 00:07:29.237 ' 00:07:29.237 12:51:31 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.237 --rc genhtml_branch_coverage=1 00:07:29.237 --rc genhtml_function_coverage=1 00:07:29.237 --rc genhtml_legend=1 00:07:29.237 --rc geninfo_all_blocks=1 00:07:29.237 --rc geninfo_unexecuted_blocks=1 00:07:29.237 00:07:29.237 ' 00:07:29.237 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.237 12:51:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.237 12:51:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.238 12:51:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.238 12:51:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.238 12:51:31 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.238 12:51:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.238 12:51:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.238 12:51:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:29.238 12:51:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:29.238 12:51:31 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.238 12:51:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:29.238 INFO: launching applications... 00:07:29.238 12:51:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=688750 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:29.238 Waiting for target to run... 
00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 688750 /var/tmp/spdk_tgt.sock 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 688750 ']' 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.238 12:51:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:29.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.238 12:51:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:29.238 [2024-11-29 12:51:31.875072] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:29.238 [2024-11-29 12:51:31.875148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688750 ] 00:07:29.504 [2024-11-29 12:51:32.143934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.504 [2024-11-29 12:51:32.166896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.075 12:51:32 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.075 12:51:32 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:30.075 12:51:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:30.075 00:07:30.075 12:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:30.075 INFO: shutting down applications... 00:07:30.075 12:51:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:30.075 12:51:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 688750 ]] 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 688750 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 688750 00:07:30.076 12:51:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:30.646 12:51:33 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 688750 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:30.646 12:51:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:30.646 SPDK target shutdown done 00:07:30.646 12:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:30.646 Success 00:07:30.646 00:07:30.646 real 0m1.575s 00:07:30.646 user 0m1.190s 00:07:30.646 sys 0m0.412s 00:07:30.646 12:51:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.646 12:51:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:30.646 ************************************ 00:07:30.646 END TEST json_config_extra_key 00:07:30.646 ************************************ 00:07:30.646 12:51:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:30.646 12:51:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.646 12:51:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.646 12:51:33 -- common/autotest_common.sh@10 -- # set +x 00:07:30.646 ************************************ 00:07:30.646 START TEST alias_rpc 00:07:30.646 ************************************ 00:07:30.646 12:51:33 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:30.906 * Looking for test storage... 
00:07:30.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:30.906 12:51:33 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.906 12:51:33 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.906 12:51:33 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.906 12:51:33 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.906 12:51:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.907 12:51:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.907 --rc genhtml_branch_coverage=1 00:07:30.907 --rc genhtml_function_coverage=1 00:07:30.907 --rc genhtml_legend=1 00:07:30.907 --rc geninfo_all_blocks=1 00:07:30.907 --rc geninfo_unexecuted_blocks=1 00:07:30.907 00:07:30.907 ' 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.907 --rc genhtml_branch_coverage=1 00:07:30.907 --rc genhtml_function_coverage=1 00:07:30.907 --rc genhtml_legend=1 00:07:30.907 --rc geninfo_all_blocks=1 00:07:30.907 --rc geninfo_unexecuted_blocks=1 00:07:30.907 00:07:30.907 ' 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:30.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.907 --rc genhtml_branch_coverage=1 00:07:30.907 --rc genhtml_function_coverage=1 00:07:30.907 --rc genhtml_legend=1 00:07:30.907 --rc geninfo_all_blocks=1 00:07:30.907 --rc geninfo_unexecuted_blocks=1 00:07:30.907 00:07:30.907 ' 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.907 --rc genhtml_branch_coverage=1 00:07:30.907 --rc genhtml_function_coverage=1 00:07:30.907 --rc genhtml_legend=1 00:07:30.907 --rc geninfo_all_blocks=1 00:07:30.907 --rc geninfo_unexecuted_blocks=1 00:07:30.907 00:07:30.907 ' 00:07:30.907 12:51:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:30.907 12:51:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=689128 00:07:30.907 12:51:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 689128 00:07:30.907 12:51:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 689128 ']' 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.907 12:51:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.907 [2024-11-29 12:51:33.511583] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:30.907 [2024-11-29 12:51:33.511638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689128 ] 00:07:31.167 [2024-11-29 12:51:33.595588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.167 [2024-11-29 12:51:33.627795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.751 12:51:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.751 12:51:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.751 12:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:32.080 12:51:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 689128 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 689128 ']' 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 689128 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689128 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689128' 00:07:32.080 killing process with pid 689128 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 689128 00:07:32.080 12:51:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 689128 00:07:32.426 00:07:32.426 real 0m1.504s 00:07:32.426 user 0m1.668s 00:07:32.426 sys 0m0.415s 00:07:32.426 12:51:34 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.426 12:51:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 ************************************ 00:07:32.426 END TEST alias_rpc 00:07:32.426 ************************************ 00:07:32.426 12:51:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:32.426 12:51:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:32.426 12:51:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.426 12:51:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.426 12:51:34 -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 ************************************ 00:07:32.426 START TEST spdkcli_tcp 00:07:32.426 ************************************ 00:07:32.426 12:51:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:32.426 * Looking for test storage... 
00:07:32.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:32.426 12:51:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:32.426 12:51:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:32.426 12:51:34 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.426 12:51:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:32.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.426 --rc genhtml_branch_coverage=1 00:07:32.426 --rc genhtml_function_coverage=1 00:07:32.426 --rc genhtml_legend=1 00:07:32.426 --rc geninfo_all_blocks=1 00:07:32.426 --rc geninfo_unexecuted_blocks=1 00:07:32.426 00:07:32.426 ' 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:32.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.426 --rc genhtml_branch_coverage=1 00:07:32.426 --rc genhtml_function_coverage=1 00:07:32.426 --rc genhtml_legend=1 00:07:32.426 --rc geninfo_all_blocks=1 00:07:32.426 --rc geninfo_unexecuted_blocks=1 00:07:32.426 00:07:32.426 ' 00:07:32.426 12:51:35 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:32.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.426 --rc genhtml_branch_coverage=1 00:07:32.426 --rc genhtml_function_coverage=1 00:07:32.426 --rc genhtml_legend=1 00:07:32.426 --rc geninfo_all_blocks=1 00:07:32.426 --rc geninfo_unexecuted_blocks=1 00:07:32.426 00:07:32.426 ' 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:32.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.426 --rc genhtml_branch_coverage=1 00:07:32.426 --rc genhtml_function_coverage=1 00:07:32.426 --rc genhtml_legend=1 00:07:32.426 --rc geninfo_all_blocks=1 00:07:32.426 --rc geninfo_unexecuted_blocks=1 00:07:32.426 00:07:32.426 ' 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=689464 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 689464 00:07:32.426 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 689464 ']' 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.426 12:51:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.427 12:51:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.427 12:51:35 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.427 12:51:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.692 [2024-11-29 12:51:35.106190] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:32.692 [2024-11-29 12:51:35.106266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689464 ] 00:07:32.692 [2024-11-29 12:51:35.195897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.692 [2024-11-29 12:51:35.231863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.692 [2024-11-29 12:51:35.231864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.264 12:51:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.264 12:51:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:33.264 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=689571 00:07:33.264 12:51:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:33.264 12:51:35 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:33.524 [ 00:07:33.524 "bdev_malloc_delete", 00:07:33.524 "bdev_malloc_create", 00:07:33.524 "bdev_null_resize", 00:07:33.524 "bdev_null_delete", 00:07:33.524 "bdev_null_create", 00:07:33.524 "bdev_nvme_cuse_unregister", 00:07:33.524 "bdev_nvme_cuse_register", 00:07:33.524 "bdev_opal_new_user", 00:07:33.524 "bdev_opal_set_lock_state", 00:07:33.524 "bdev_opal_delete", 00:07:33.524 "bdev_opal_get_info", 00:07:33.524 "bdev_opal_create", 00:07:33.524 "bdev_nvme_opal_revert", 00:07:33.524 "bdev_nvme_opal_init", 00:07:33.524 "bdev_nvme_send_cmd", 00:07:33.524 "bdev_nvme_set_keys", 00:07:33.524 "bdev_nvme_get_path_iostat", 00:07:33.524 "bdev_nvme_get_mdns_discovery_info", 00:07:33.524 "bdev_nvme_stop_mdns_discovery", 00:07:33.524 "bdev_nvme_start_mdns_discovery", 00:07:33.524 "bdev_nvme_set_multipath_policy", 00:07:33.524 "bdev_nvme_set_preferred_path", 00:07:33.524 "bdev_nvme_get_io_paths", 00:07:33.524 "bdev_nvme_remove_error_injection", 00:07:33.524 "bdev_nvme_add_error_injection", 00:07:33.524 "bdev_nvme_get_discovery_info", 00:07:33.524 "bdev_nvme_stop_discovery", 00:07:33.524 "bdev_nvme_start_discovery", 00:07:33.524 "bdev_nvme_get_controller_health_info", 00:07:33.524 "bdev_nvme_disable_controller", 00:07:33.524 "bdev_nvme_enable_controller", 00:07:33.524 "bdev_nvme_reset_controller", 00:07:33.524 "bdev_nvme_get_transport_statistics", 00:07:33.524 "bdev_nvme_apply_firmware", 00:07:33.524 "bdev_nvme_detach_controller", 00:07:33.524 "bdev_nvme_get_controllers", 00:07:33.524 "bdev_nvme_attach_controller", 00:07:33.524 "bdev_nvme_set_hotplug", 00:07:33.524 "bdev_nvme_set_options", 00:07:33.524 "bdev_passthru_delete", 00:07:33.524 "bdev_passthru_create", 00:07:33.524 "bdev_lvol_set_parent_bdev", 00:07:33.524 "bdev_lvol_set_parent", 00:07:33.524 "bdev_lvol_check_shallow_copy", 00:07:33.524 "bdev_lvol_start_shallow_copy", 00:07:33.524 "bdev_lvol_grow_lvstore", 00:07:33.524 
"bdev_lvol_get_lvols", 00:07:33.524 "bdev_lvol_get_lvstores", 00:07:33.524 "bdev_lvol_delete", 00:07:33.524 "bdev_lvol_set_read_only", 00:07:33.524 "bdev_lvol_resize", 00:07:33.524 "bdev_lvol_decouple_parent", 00:07:33.524 "bdev_lvol_inflate", 00:07:33.524 "bdev_lvol_rename", 00:07:33.524 "bdev_lvol_clone_bdev", 00:07:33.524 "bdev_lvol_clone", 00:07:33.524 "bdev_lvol_snapshot", 00:07:33.524 "bdev_lvol_create", 00:07:33.524 "bdev_lvol_delete_lvstore", 00:07:33.524 "bdev_lvol_rename_lvstore", 00:07:33.524 "bdev_lvol_create_lvstore", 00:07:33.524 "bdev_raid_set_options", 00:07:33.524 "bdev_raid_remove_base_bdev", 00:07:33.524 "bdev_raid_add_base_bdev", 00:07:33.524 "bdev_raid_delete", 00:07:33.524 "bdev_raid_create", 00:07:33.524 "bdev_raid_get_bdevs", 00:07:33.524 "bdev_error_inject_error", 00:07:33.524 "bdev_error_delete", 00:07:33.524 "bdev_error_create", 00:07:33.524 "bdev_split_delete", 00:07:33.525 "bdev_split_create", 00:07:33.525 "bdev_delay_delete", 00:07:33.525 "bdev_delay_create", 00:07:33.525 "bdev_delay_update_latency", 00:07:33.525 "bdev_zone_block_delete", 00:07:33.525 "bdev_zone_block_create", 00:07:33.525 "blobfs_create", 00:07:33.525 "blobfs_detect", 00:07:33.525 "blobfs_set_cache_size", 00:07:33.525 "bdev_aio_delete", 00:07:33.525 "bdev_aio_rescan", 00:07:33.525 "bdev_aio_create", 00:07:33.525 "bdev_ftl_set_property", 00:07:33.525 "bdev_ftl_get_properties", 00:07:33.525 "bdev_ftl_get_stats", 00:07:33.525 "bdev_ftl_unmap", 00:07:33.525 "bdev_ftl_unload", 00:07:33.525 "bdev_ftl_delete", 00:07:33.525 "bdev_ftl_load", 00:07:33.525 "bdev_ftl_create", 00:07:33.525 "bdev_virtio_attach_controller", 00:07:33.525 "bdev_virtio_scsi_get_devices", 00:07:33.525 "bdev_virtio_detach_controller", 00:07:33.525 "bdev_virtio_blk_set_hotplug", 00:07:33.525 "bdev_iscsi_delete", 00:07:33.525 "bdev_iscsi_create", 00:07:33.525 "bdev_iscsi_set_options", 00:07:33.525 "accel_error_inject_error", 00:07:33.525 "ioat_scan_accel_module", 00:07:33.525 "dsa_scan_accel_module", 
00:07:33.525 "iaa_scan_accel_module", 00:07:33.525 "vfu_virtio_create_fs_endpoint", 00:07:33.525 "vfu_virtio_create_scsi_endpoint", 00:07:33.525 "vfu_virtio_scsi_remove_target", 00:07:33.525 "vfu_virtio_scsi_add_target", 00:07:33.525 "vfu_virtio_create_blk_endpoint", 00:07:33.525 "vfu_virtio_delete_endpoint", 00:07:33.525 "keyring_file_remove_key", 00:07:33.525 "keyring_file_add_key", 00:07:33.525 "keyring_linux_set_options", 00:07:33.525 "fsdev_aio_delete", 00:07:33.525 "fsdev_aio_create", 00:07:33.525 "iscsi_get_histogram", 00:07:33.525 "iscsi_enable_histogram", 00:07:33.525 "iscsi_set_options", 00:07:33.525 "iscsi_get_auth_groups", 00:07:33.525 "iscsi_auth_group_remove_secret", 00:07:33.525 "iscsi_auth_group_add_secret", 00:07:33.525 "iscsi_delete_auth_group", 00:07:33.525 "iscsi_create_auth_group", 00:07:33.525 "iscsi_set_discovery_auth", 00:07:33.525 "iscsi_get_options", 00:07:33.525 "iscsi_target_node_request_logout", 00:07:33.525 "iscsi_target_node_set_redirect", 00:07:33.525 "iscsi_target_node_set_auth", 00:07:33.525 "iscsi_target_node_add_lun", 00:07:33.525 "iscsi_get_stats", 00:07:33.525 "iscsi_get_connections", 00:07:33.525 "iscsi_portal_group_set_auth", 00:07:33.525 "iscsi_start_portal_group", 00:07:33.525 "iscsi_delete_portal_group", 00:07:33.525 "iscsi_create_portal_group", 00:07:33.525 "iscsi_get_portal_groups", 00:07:33.525 "iscsi_delete_target_node", 00:07:33.525 "iscsi_target_node_remove_pg_ig_maps", 00:07:33.525 "iscsi_target_node_add_pg_ig_maps", 00:07:33.525 "iscsi_create_target_node", 00:07:33.525 "iscsi_get_target_nodes", 00:07:33.525 "iscsi_delete_initiator_group", 00:07:33.525 "iscsi_initiator_group_remove_initiators", 00:07:33.525 "iscsi_initiator_group_add_initiators", 00:07:33.525 "iscsi_create_initiator_group", 00:07:33.525 "iscsi_get_initiator_groups", 00:07:33.525 "nvmf_set_crdt", 00:07:33.525 "nvmf_set_config", 00:07:33.525 "nvmf_set_max_subsystems", 00:07:33.525 "nvmf_stop_mdns_prr", 00:07:33.525 "nvmf_publish_mdns_prr", 
00:07:33.525 "nvmf_subsystem_get_listeners", 00:07:33.525 "nvmf_subsystem_get_qpairs", 00:07:33.525 "nvmf_subsystem_get_controllers", 00:07:33.525 "nvmf_get_stats", 00:07:33.525 "nvmf_get_transports", 00:07:33.525 "nvmf_create_transport", 00:07:33.525 "nvmf_get_targets", 00:07:33.525 "nvmf_delete_target", 00:07:33.525 "nvmf_create_target", 00:07:33.525 "nvmf_subsystem_allow_any_host", 00:07:33.525 "nvmf_subsystem_set_keys", 00:07:33.525 "nvmf_subsystem_remove_host", 00:07:33.525 "nvmf_subsystem_add_host", 00:07:33.525 "nvmf_ns_remove_host", 00:07:33.525 "nvmf_ns_add_host", 00:07:33.525 "nvmf_subsystem_remove_ns", 00:07:33.525 "nvmf_subsystem_set_ns_ana_group", 00:07:33.525 "nvmf_subsystem_add_ns", 00:07:33.525 "nvmf_subsystem_listener_set_ana_state", 00:07:33.525 "nvmf_discovery_get_referrals", 00:07:33.525 "nvmf_discovery_remove_referral", 00:07:33.525 "nvmf_discovery_add_referral", 00:07:33.525 "nvmf_subsystem_remove_listener", 00:07:33.525 "nvmf_subsystem_add_listener", 00:07:33.525 "nvmf_delete_subsystem", 00:07:33.525 "nvmf_create_subsystem", 00:07:33.525 "nvmf_get_subsystems", 00:07:33.525 "env_dpdk_get_mem_stats", 00:07:33.525 "nbd_get_disks", 00:07:33.525 "nbd_stop_disk", 00:07:33.525 "nbd_start_disk", 00:07:33.525 "ublk_recover_disk", 00:07:33.525 "ublk_get_disks", 00:07:33.525 "ublk_stop_disk", 00:07:33.525 "ublk_start_disk", 00:07:33.525 "ublk_destroy_target", 00:07:33.525 "ublk_create_target", 00:07:33.525 "virtio_blk_create_transport", 00:07:33.525 "virtio_blk_get_transports", 00:07:33.525 "vhost_controller_set_coalescing", 00:07:33.525 "vhost_get_controllers", 00:07:33.525 "vhost_delete_controller", 00:07:33.525 "vhost_create_blk_controller", 00:07:33.525 "vhost_scsi_controller_remove_target", 00:07:33.525 "vhost_scsi_controller_add_target", 00:07:33.525 "vhost_start_scsi_controller", 00:07:33.525 "vhost_create_scsi_controller", 00:07:33.525 "thread_set_cpumask", 00:07:33.525 "scheduler_set_options", 00:07:33.525 "framework_get_governor", 00:07:33.525 
"framework_get_scheduler", 00:07:33.525 "framework_set_scheduler", 00:07:33.525 "framework_get_reactors", 00:07:33.525 "thread_get_io_channels", 00:07:33.525 "thread_get_pollers", 00:07:33.525 "thread_get_stats", 00:07:33.525 "framework_monitor_context_switch", 00:07:33.525 "spdk_kill_instance", 00:07:33.525 "log_enable_timestamps", 00:07:33.525 "log_get_flags", 00:07:33.525 "log_clear_flag", 00:07:33.525 "log_set_flag", 00:07:33.525 "log_get_level", 00:07:33.525 "log_set_level", 00:07:33.525 "log_get_print_level", 00:07:33.525 "log_set_print_level", 00:07:33.525 "framework_enable_cpumask_locks", 00:07:33.525 "framework_disable_cpumask_locks", 00:07:33.525 "framework_wait_init", 00:07:33.525 "framework_start_init", 00:07:33.525 "scsi_get_devices", 00:07:33.525 "bdev_get_histogram", 00:07:33.525 "bdev_enable_histogram", 00:07:33.525 "bdev_set_qos_limit", 00:07:33.525 "bdev_set_qd_sampling_period", 00:07:33.525 "bdev_get_bdevs", 00:07:33.525 "bdev_reset_iostat", 00:07:33.525 "bdev_get_iostat", 00:07:33.525 "bdev_examine", 00:07:33.525 "bdev_wait_for_examine", 00:07:33.525 "bdev_set_options", 00:07:33.525 "accel_get_stats", 00:07:33.525 "accel_set_options", 00:07:33.525 "accel_set_driver", 00:07:33.525 "accel_crypto_key_destroy", 00:07:33.525 "accel_crypto_keys_get", 00:07:33.525 "accel_crypto_key_create", 00:07:33.525 "accel_assign_opc", 00:07:33.525 "accel_get_module_info", 00:07:33.525 "accel_get_opc_assignments", 00:07:33.525 "vmd_rescan", 00:07:33.525 "vmd_remove_device", 00:07:33.525 "vmd_enable", 00:07:33.525 "sock_get_default_impl", 00:07:33.525 "sock_set_default_impl", 00:07:33.525 "sock_impl_set_options", 00:07:33.525 "sock_impl_get_options", 00:07:33.525 "iobuf_get_stats", 00:07:33.525 "iobuf_set_options", 00:07:33.525 "keyring_get_keys", 00:07:33.525 "vfu_tgt_set_base_path", 00:07:33.525 "framework_get_pci_devices", 00:07:33.525 "framework_get_config", 00:07:33.525 "framework_get_subsystems", 00:07:33.525 "fsdev_set_opts", 00:07:33.525 "fsdev_get_opts", 
00:07:33.525 "trace_get_info", 00:07:33.525 "trace_get_tpoint_group_mask", 00:07:33.525 "trace_disable_tpoint_group", 00:07:33.525 "trace_enable_tpoint_group", 00:07:33.525 "trace_clear_tpoint_mask", 00:07:33.525 "trace_set_tpoint_mask", 00:07:33.525 "notify_get_notifications", 00:07:33.525 "notify_get_types", 00:07:33.525 "spdk_get_version", 00:07:33.525 "rpc_get_methods" 00:07:33.525 ] 00:07:33.525 12:51:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.525 12:51:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:33.525 12:51:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 689464 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 689464 ']' 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 689464 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689464 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689464' 00:07:33.525 killing process with pid 689464 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 689464 00:07:33.525 12:51:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 689464 00:07:33.785 00:07:33.785 real 0m1.541s 00:07:33.785 user 0m2.782s 00:07:33.785 sys 0m0.496s 00:07:33.785 12:51:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.785 12:51:36 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.785 ************************************ 00:07:33.785 END TEST spdkcli_tcp 00:07:33.785 ************************************ 00:07:33.785 12:51:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:33.785 12:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.785 12:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.786 12:51:36 -- common/autotest_common.sh@10 -- # set +x 00:07:33.786 ************************************ 00:07:33.786 START TEST dpdk_mem_utility 00:07:33.786 ************************************ 00:07:33.786 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:34.045 * Looking for test storage... 00:07:34.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:34.045 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.045 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.045 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.045 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.045 12:51:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.046 12:51:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:07:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.046 --rc genhtml_branch_coverage=1 00:07:34.046 --rc genhtml_function_coverage=1 00:07:34.046 --rc genhtml_legend=1 00:07:34.046 --rc geninfo_all_blocks=1 00:07:34.046 --rc geninfo_unexecuted_blocks=1 00:07:34.046 00:07:34.046 ' 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.046 --rc genhtml_branch_coverage=1 00:07:34.046 --rc genhtml_function_coverage=1 00:07:34.046 --rc genhtml_legend=1 00:07:34.046 --rc geninfo_all_blocks=1 00:07:34.046 --rc geninfo_unexecuted_blocks=1 00:07:34.046 00:07:34.046 ' 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.046 --rc genhtml_branch_coverage=1 00:07:34.046 --rc genhtml_function_coverage=1 00:07:34.046 --rc genhtml_legend=1 00:07:34.046 --rc geninfo_all_blocks=1 00:07:34.046 --rc geninfo_unexecuted_blocks=1 00:07:34.046 00:07:34.046 ' 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.046 --rc genhtml_branch_coverage=1 00:07:34.046 --rc genhtml_function_coverage=1 00:07:34.046 --rc genhtml_legend=1 00:07:34.046 --rc geninfo_all_blocks=1 00:07:34.046 --rc geninfo_unexecuted_blocks=1 00:07:34.046 00:07:34.046 ' 00:07:34.046 12:51:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:34.046 12:51:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=689830 00:07:34.046 12:51:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 689830 00:07:34.046 12:51:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 689830 ']' 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.046 12:51:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:34.046 [2024-11-29 12:51:36.711865] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:34.046 [2024-11-29 12:51:36.711936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689830 ] 00:07:34.306 [2024-11-29 12:51:36.799664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.306 [2024-11-29 12:51:36.834786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.875 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.876 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:34.876 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:34.876 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:34.876 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.876 
12:51:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:34.876 { 00:07:34.876 "filename": "/tmp/spdk_mem_dump.txt" 00:07:34.876 } 00:07:34.876 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.876 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:35.136 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:35.136 1 heaps totaling size 818.000000 MiB 00:07:35.136 size: 818.000000 MiB heap id: 0 00:07:35.136 end heaps---------- 00:07:35.136 9 mempools totaling size 603.782043 MiB 00:07:35.136 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:35.136 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:35.136 size: 100.555481 MiB name: bdev_io_689830 00:07:35.136 size: 50.003479 MiB name: msgpool_689830 00:07:35.136 size: 36.509338 MiB name: fsdev_io_689830 00:07:35.136 size: 21.763794 MiB name: PDU_Pool 00:07:35.136 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:35.136 size: 4.133484 MiB name: evtpool_689830 00:07:35.136 size: 0.026123 MiB name: Session_Pool 00:07:35.136 end mempools------- 00:07:35.136 6 memzones totaling size 4.142822 MiB 00:07:35.136 size: 1.000366 MiB name: RG_ring_0_689830 00:07:35.136 size: 1.000366 MiB name: RG_ring_1_689830 00:07:35.136 size: 1.000366 MiB name: RG_ring_4_689830 00:07:35.137 size: 1.000366 MiB name: RG_ring_5_689830 00:07:35.137 size: 0.125366 MiB name: RG_ring_2_689830 00:07:35.137 size: 0.015991 MiB name: RG_ring_3_689830 00:07:35.137 end memzones------- 00:07:35.137 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:35.137 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:35.137 list of free elements. 
size: 10.852478 MiB 00:07:35.137 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:35.137 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:35.137 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:35.137 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:35.137 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:35.137 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:35.137 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:35.137 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:35.137 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:35.137 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:35.137 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:35.137 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:35.137 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:35.137 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:35.137 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:35.137 list of standard malloc elements. 
size: 199.218628 MiB 00:07:35.137 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:35.137 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:35.137 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:35.137 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:35.137 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:35.137 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:35.137 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:35.137 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:35.137 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:35.137 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:35.137 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:35.137 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:35.137 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:35.137 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:35.137 list of memzone associated elements. 
size: 607.928894 MiB 00:07:35.137 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:35.137 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:35.137 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:35.137 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:35.137 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:35.137 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_689830_0 00:07:35.137 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:35.137 associated memzone info: size: 48.002930 MiB name: MP_msgpool_689830_0 00:07:35.137 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:35.137 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_689830_0 00:07:35.137 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:35.137 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:35.137 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:35.137 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:35.137 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:35.137 associated memzone info: size: 3.000122 MiB name: MP_evtpool_689830_0 00:07:35.137 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:35.137 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_689830 00:07:35.137 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:35.137 associated memzone info: size: 1.007996 MiB name: MP_evtpool_689830 00:07:35.137 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:35.137 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:35.137 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:35.137 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:35.137 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:35.137 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:35.137 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:35.137 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:35.137 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:35.137 associated memzone info: size: 1.000366 MiB name: RG_ring_0_689830 00:07:35.137 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:35.137 associated memzone info: size: 1.000366 MiB name: RG_ring_1_689830 00:07:35.137 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:35.137 associated memzone info: size: 1.000366 MiB name: RG_ring_4_689830 00:07:35.137 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:35.137 associated memzone info: size: 1.000366 MiB name: RG_ring_5_689830 00:07:35.137 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:35.137 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_689830 00:07:35.137 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:35.137 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_689830 00:07:35.137 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:35.137 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:35.137 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:35.137 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:35.137 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:35.137 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:35.137 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:35.137 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_689830 00:07:35.137 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:35.137 associated memzone info: size: 0.125366 MiB name: RG_ring_2_689830 00:07:35.137 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:07:35.137 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:35.137 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:35.137 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:35.137 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:35.137 associated memzone info: size: 0.015991 MiB name: RG_ring_3_689830 00:07:35.137 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:35.137 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:35.137 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:35.137 associated memzone info: size: 0.000183 MiB name: MP_msgpool_689830 00:07:35.137 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:35.137 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_689830 00:07:35.137 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:35.137 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_689830 00:07:35.137 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:35.137 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:35.137 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:35.137 12:51:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 689830 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 689830 ']' 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 689830 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689830 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.137 12:51:37 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689830' 00:07:35.137 killing process with pid 689830 00:07:35.137 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 689830 00:07:35.138 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 689830 00:07:35.399 00:07:35.399 real 0m1.414s 00:07:35.399 user 0m1.491s 00:07:35.399 sys 0m0.417s 00:07:35.399 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.399 12:51:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:35.399 ************************************ 00:07:35.399 END TEST dpdk_mem_utility 00:07:35.399 ************************************ 00:07:35.399 12:51:37 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:35.399 12:51:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.399 12:51:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.399 12:51:37 -- common/autotest_common.sh@10 -- # set +x 00:07:35.399 ************************************ 00:07:35.399 START TEST event 00:07:35.399 ************************************ 00:07:35.399 12:51:37 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:35.399 * Looking for test storage... 
00:07:35.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:35.399 12:51:38 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.399 12:51:38 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.399 12:51:38 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.660 12:51:38 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.660 12:51:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.660 12:51:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.661 12:51:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.661 12:51:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.661 12:51:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.661 12:51:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.661 12:51:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.661 12:51:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.661 12:51:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.661 12:51:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.661 12:51:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.661 12:51:38 event -- scripts/common.sh@344 -- # case "$op" in 00:07:35.661 12:51:38 event -- scripts/common.sh@345 -- # : 1 00:07:35.661 12:51:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.661 12:51:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.661 12:51:38 event -- scripts/common.sh@365 -- # decimal 1 00:07:35.661 12:51:38 event -- scripts/common.sh@353 -- # local d=1 00:07:35.661 12:51:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.661 12:51:38 event -- scripts/common.sh@355 -- # echo 1 00:07:35.661 12:51:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.661 12:51:38 event -- scripts/common.sh@366 -- # decimal 2 00:07:35.661 12:51:38 event -- scripts/common.sh@353 -- # local d=2 00:07:35.661 12:51:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.661 12:51:38 event -- scripts/common.sh@355 -- # echo 2 00:07:35.661 12:51:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.661 12:51:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.661 12:51:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.661 12:51:38 event -- scripts/common.sh@368 -- # return 0 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.661 --rc genhtml_branch_coverage=1 00:07:35.661 --rc genhtml_function_coverage=1 00:07:35.661 --rc genhtml_legend=1 00:07:35.661 --rc geninfo_all_blocks=1 00:07:35.661 --rc geninfo_unexecuted_blocks=1 00:07:35.661 00:07:35.661 ' 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.661 --rc genhtml_branch_coverage=1 00:07:35.661 --rc genhtml_function_coverage=1 00:07:35.661 --rc genhtml_legend=1 00:07:35.661 --rc geninfo_all_blocks=1 00:07:35.661 --rc geninfo_unexecuted_blocks=1 00:07:35.661 00:07:35.661 ' 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.661 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:35.661 --rc genhtml_branch_coverage=1 00:07:35.661 --rc genhtml_function_coverage=1 00:07:35.661 --rc genhtml_legend=1 00:07:35.661 --rc geninfo_all_blocks=1 00:07:35.661 --rc geninfo_unexecuted_blocks=1 00:07:35.661 00:07:35.661 ' 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.661 --rc genhtml_branch_coverage=1 00:07:35.661 --rc genhtml_function_coverage=1 00:07:35.661 --rc genhtml_legend=1 00:07:35.661 --rc geninfo_all_blocks=1 00:07:35.661 --rc geninfo_unexecuted_blocks=1 00:07:35.661 00:07:35.661 ' 00:07:35.661 12:51:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:35.661 12:51:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:35.661 12:51:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:35.661 12:51:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.661 12:51:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.661 ************************************ 00:07:35.661 START TEST event_perf 00:07:35.661 ************************************ 00:07:35.661 12:51:38 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:35.661 Running I/O for 1 seconds...[2024-11-29 12:51:38.206783] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:35.661 [2024-11-29 12:51:38.206901] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690143 ] 00:07:35.661 [2024-11-29 12:51:38.293882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.661 [2024-11-29 12:51:38.328917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.661 [2024-11-29 12:51:38.329067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.661 [2024-11-29 12:51:38.329219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.661 [2024-11-29 12:51:38.329406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.043 Running I/O for 1 seconds... 00:07:37.043 lcore 0: 177767 00:07:37.043 lcore 1: 177768 00:07:37.043 lcore 2: 177767 00:07:37.043 lcore 3: 177768 00:07:37.043 done. 
00:07:37.043 00:07:37.043 real 0m1.173s 00:07:37.043 user 0m4.085s 00:07:37.043 sys 0m0.086s 00:07:37.043 12:51:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.043 12:51:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.043 ************************************ 00:07:37.043 END TEST event_perf 00:07:37.043 ************************************ 00:07:37.043 12:51:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:37.043 12:51:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:37.043 12:51:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.043 12:51:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.043 ************************************ 00:07:37.043 START TEST event_reactor 00:07:37.043 ************************************ 00:07:37.043 12:51:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:37.043 [2024-11-29 12:51:39.454831] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:37.043 [2024-11-29 12:51:39.454929] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690405 ] 00:07:37.043 [2024-11-29 12:51:39.543174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.043 [2024-11-29 12:51:39.579539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.985 test_start 00:07:37.985 oneshot 00:07:37.985 tick 100 00:07:37.985 tick 100 00:07:37.985 tick 250 00:07:37.985 tick 100 00:07:37.985 tick 100 00:07:37.985 tick 250 00:07:37.985 tick 100 00:07:37.985 tick 500 00:07:37.985 tick 100 00:07:37.985 tick 100 00:07:37.985 tick 250 00:07:37.985 tick 100 00:07:37.985 tick 100 00:07:37.985 test_end 00:07:37.985 00:07:37.985 real 0m1.175s 00:07:37.985 user 0m1.091s 00:07:37.985 sys 0m0.080s 00:07:37.985 12:51:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.985 12:51:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:37.985 ************************************ 00:07:37.985 END TEST event_reactor 00:07:37.985 ************************************ 00:07:37.985 12:51:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:37.985 12:51:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:37.985 12:51:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.985 12:51:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:38.245 ************************************ 00:07:38.245 START TEST event_reactor_perf 00:07:38.245 ************************************ 00:07:38.245 12:51:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:38.245 [2024-11-29 12:51:40.708819] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:38.245 [2024-11-29 12:51:40.708926] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690753 ] 00:07:38.245 [2024-11-29 12:51:40.795251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.245 [2024-11-29 12:51:40.825221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.185 test_start 00:07:39.185 test_end 00:07:39.185 Performance: 537722 events per second 00:07:39.185 00:07:39.185 real 0m1.165s 00:07:39.185 user 0m1.078s 00:07:39.185 sys 0m0.083s 00:07:39.185 12:51:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.185 12:51:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.185 ************************************ 00:07:39.185 END TEST event_reactor_perf 00:07:39.185 ************************************ 00:07:39.446 12:51:41 event -- event/event.sh@49 -- # uname -s 00:07:39.446 12:51:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:39.446 12:51:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:39.446 12:51:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.446 12:51:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.446 12:51:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 ************************************ 00:07:39.446 START TEST event_scheduler 00:07:39.446 ************************************ 00:07:39.446 12:51:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:39.446 * Looking for test storage... 00:07:39.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:39.446 12:51:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.446 12:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.446 12:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.446 12:51:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:39.446 12:51:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.706 12:51:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:39.706 12:51:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.707 12:51:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.707 --rc genhtml_branch_coverage=1 00:07:39.707 --rc genhtml_function_coverage=1 00:07:39.707 --rc genhtml_legend=1 00:07:39.707 --rc geninfo_all_blocks=1 00:07:39.707 --rc geninfo_unexecuted_blocks=1 00:07:39.707 00:07:39.707 ' 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.707 --rc genhtml_branch_coverage=1 00:07:39.707 --rc genhtml_function_coverage=1 00:07:39.707 --rc 
genhtml_legend=1 00:07:39.707 --rc geninfo_all_blocks=1 00:07:39.707 --rc geninfo_unexecuted_blocks=1 00:07:39.707 00:07:39.707 ' 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.707 --rc genhtml_branch_coverage=1 00:07:39.707 --rc genhtml_function_coverage=1 00:07:39.707 --rc genhtml_legend=1 00:07:39.707 --rc geninfo_all_blocks=1 00:07:39.707 --rc geninfo_unexecuted_blocks=1 00:07:39.707 00:07:39.707 ' 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.707 --rc genhtml_branch_coverage=1 00:07:39.707 --rc genhtml_function_coverage=1 00:07:39.707 --rc genhtml_legend=1 00:07:39.707 --rc geninfo_all_blocks=1 00:07:39.707 --rc geninfo_unexecuted_blocks=1 00:07:39.707 00:07:39.707 ' 00:07:39.707 12:51:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:39.707 12:51:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=691146 00:07:39.707 12:51:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.707 12:51:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 691146 00:07:39.707 12:51:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 691146 ']' 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.707 12:51:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:39.707 [2024-11-29 12:51:42.190218] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:07:39.707 [2024-11-29 12:51:42.190282] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691146 ] 00:07:39.707 [2024-11-29 12:51:42.284182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.707 [2024-11-29 12:51:42.339970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.707 [2024-11-29 12:51:42.340133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.707 [2024-11-29 12:51:42.340284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.707 [2024-11-29 12:51:42.340462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:40.647 12:51:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 [2024-11-29 12:51:43.011070] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:40.647 [2024-11-29 12:51:43.011088] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:40.647 [2024-11-29 12:51:43.011098] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:40.647 [2024-11-29 12:51:43.011104] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:40.647 [2024-11-29 12:51:43.011110] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 [2024-11-29 12:51:43.079461] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 ************************************ 00:07:40.647 START TEST scheduler_create_thread 00:07:40.647 ************************************ 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 2 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 3 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 4 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 5 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 6 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.647 7 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.647 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 8 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 12:51:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 9 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.219 10 00:07:41.219 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.219 12:51:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:41.219 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.219 12:51:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.606 12:51:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.606 12:51:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:42.606 12:51:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:42.606 12:51:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.606 12:51:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.181 12:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.181 12:51:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:43.181 12:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.181 12:51:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.120 12:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.120 12:51:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:44.120 12:51:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:44.120 12:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.120 12:51:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.698 12:51:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.698 00:07:44.698 real 0m4.225s 00:07:44.698 user 0m0.025s 00:07:44.698 sys 0m0.007s 00:07:44.698 12:51:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.698 12:51:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.698 ************************************ 00:07:44.698 END TEST scheduler_create_thread 00:07:44.698 ************************************ 00:07:44.958 12:51:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:44.958 12:51:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 691146 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 691146 ']' 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 691146 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691146 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691146' 00:07:44.958 killing process with pid 691146 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 691146 00:07:44.958 12:51:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 691146 00:07:44.958 [2024-11-29 12:51:47.625153] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:45.218 00:07:45.218 real 0m5.847s 00:07:45.218 user 0m12.890s 00:07:45.218 sys 0m0.442s 00:07:45.218 12:51:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.218 12:51:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:45.218 ************************************ 00:07:45.218 END TEST event_scheduler 00:07:45.218 ************************************ 00:07:45.218 12:51:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:45.218 12:51:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:45.218 12:51:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.218 12:51:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.218 12:51:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.218 ************************************ 00:07:45.218 START TEST app_repeat 00:07:45.218 ************************************ 00:07:45.218 12:51:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=692213 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 692213' 00:07:45.218 Process app_repeat pid: 692213 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:45.218 spdk_app_start Round 0 00:07:45.218 12:51:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 692213 /var/tmp/spdk-nbd.sock 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 692213 ']' 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:45.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.219 12:51:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:45.479 [2024-11-29 12:51:47.900830] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:07:45.479 [2024-11-29 12:51:47.900897] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692213 ] 00:07:45.479 [2024-11-29 12:51:47.982083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.479 [2024-11-29 12:51:48.013546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.479 [2024-11-29 12:51:48.013547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.479 12:51:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.479 12:51:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:45.479 12:51:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.739 Malloc0 00:07:45.739 12:51:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.999 Malloc1 00:07:45.999 12:51:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.999 
12:51:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.999 /dev/nbd0 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.999 12:51:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:45.999 1+0 records in 00:07:45.999 1+0 records out 00:07:45.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302524 s, 13.5 MB/s 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:45.999 12:51:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:46.259 /dev/nbd1 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.259 12:51:48 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:46.259 1+0 records in 00:07:46.259 1+0 records out 00:07:46.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287018 s, 14.3 MB/s 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.259 12:51:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.259 12:51:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:46.520 { 00:07:46.520 "nbd_device": "/dev/nbd0", 00:07:46.520 "bdev_name": "Malloc0" 00:07:46.520 }, 00:07:46.520 { 00:07:46.520 "nbd_device": "/dev/nbd1", 00:07:46.520 "bdev_name": "Malloc1" 00:07:46.520 } 00:07:46.520 ]' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:46.520 { 00:07:46.520 "nbd_device": "/dev/nbd0", 00:07:46.520 "bdev_name": "Malloc0" 00:07:46.520 
}, 00:07:46.520 { 00:07:46.520 "nbd_device": "/dev/nbd1", 00:07:46.520 "bdev_name": "Malloc1" 00:07:46.520 } 00:07:46.520 ]' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:46.520 /dev/nbd1' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:46.520 /dev/nbd1' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:46.520 256+0 records in 00:07:46.520 256+0 records out 00:07:46.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128105 s, 81.9 MB/s 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:46.520 256+0 records in 00:07:46.520 256+0 records out 00:07:46.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125089 s, 83.8 MB/s 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:46.520 256+0 records in 00:07:46.520 256+0 records out 00:07:46.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127662 s, 82.1 MB/s 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.520 12:51:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:46.781 12:51:49 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.781 12:51:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:47.042 12:51:49 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.042 12:51:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.304 12:51:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.304 12:51:49 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.564 12:51:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:47.564 [2024-11-29 12:51:50.120874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.564 [2024-11-29 12:51:50.150980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.564 [2024-11-29 12:51:50.150980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.564 [2024-11-29 12:51:50.179931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:47.564 [2024-11-29 12:51:50.179960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.862 12:51:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.862 12:51:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:50.862 spdk_app_start Round 1 00:07:50.862 12:51:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 692213 /var/tmp/spdk-nbd.sock 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 692213 ']' 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.862 12:51:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:50.862 12:51:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.862 Malloc0 00:07:50.862 12:51:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.123 Malloc1 00:07:51.123 12:51:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.123 12:51:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.123 /dev/nbd0 00:07:51.383 12:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.384 12:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.384 1+0 records in 00:07:51.384 1+0 records out 00:07:51.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284378 s, 14.4 MB/s 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:51.384 12:51:53 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:51.384 12:51:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:51.384 12:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.384 12:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.384 12:51:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:51.384 /dev/nbd1 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.384 1+0 records in 00:07:51.384 1+0 records out 00:07:51.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273345 s, 15.0 MB/s 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:51.384 12:51:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.384 12:51:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.644 { 00:07:51.644 "nbd_device": "/dev/nbd0", 00:07:51.644 "bdev_name": "Malloc0" 00:07:51.644 }, 00:07:51.644 { 00:07:51.644 "nbd_device": "/dev/nbd1", 00:07:51.644 "bdev_name": "Malloc1" 00:07:51.644 } 00:07:51.644 ]' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.644 { 00:07:51.644 "nbd_device": "/dev/nbd0", 00:07:51.644 "bdev_name": "Malloc0" 00:07:51.644 }, 00:07:51.644 { 00:07:51.644 "nbd_device": "/dev/nbd1", 00:07:51.644 "bdev_name": "Malloc1" 00:07:51.644 } 00:07:51.644 ]' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:51.644 /dev/nbd1' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:51.644 /dev/nbd1' 00:07:51.644 
12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:51.644 256+0 records in 00:07:51.644 256+0 records out 00:07:51.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012736 s, 82.3 MB/s 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.644 12:51:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:51.903 256+0 records in 00:07:51.903 256+0 records out 00:07:51.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121674 s, 86.2 MB/s 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.903 256+0 records in 00:07:51.903 256+0 records out 00:07:51.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131959 s, 79.5 MB/s 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:51.903 12:51:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.904 12:51:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.164 12:51:54 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.164 12:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.424 12:51:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.424 12:51:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:52.683 12:51:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:52.683 [2024-11-29 12:51:55.244416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.683 [2024-11-29 12:51:55.274492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.683 [2024-11-29 12:51:55.274493] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.683 [2024-11-29 12:51:55.304070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.683 [2024-11-29 12:51:55.304098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:55.982 12:51:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:55.982 12:51:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:55.982 spdk_app_start Round 2 00:07:55.982 12:51:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 692213 /var/tmp/spdk-nbd.sock 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 692213 ']' 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:55.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.982 12:51:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:55.982 12:51:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.982 Malloc0 00:07:55.982 12:51:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:56.243 Malloc1 00:07:56.243 12:51:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.243 12:51:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:56.243 /dev/nbd0 00:07:56.504 12:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.504 12:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.504 1+0 records in 00:07:56.504 1+0 records out 00:07:56.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206374 s, 19.8 MB/s 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:56.504 12:51:58 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.504 12:51:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:56.504 12:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.504 12:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.504 12:51:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:56.504 /dev/nbd1 00:07:56.504 12:51:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.504 12:51:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.504 1+0 records in 00:07:56.504 1+0 records out 00:07:56.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276312 s, 14.8 MB/s 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.504 12:51:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:56.505 12:51:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.505 12:51:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.505 12:51:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:56.505 12:51:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.505 12:51:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.766 { 00:07:56.766 "nbd_device": "/dev/nbd0", 00:07:56.766 "bdev_name": "Malloc0" 00:07:56.766 }, 00:07:56.766 { 00:07:56.766 "nbd_device": "/dev/nbd1", 00:07:56.766 "bdev_name": "Malloc1" 00:07:56.766 } 00:07:56.766 ]' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.766 { 00:07:56.766 "nbd_device": "/dev/nbd0", 00:07:56.766 "bdev_name": "Malloc0" 00:07:56.766 }, 00:07:56.766 { 00:07:56.766 "nbd_device": "/dev/nbd1", 00:07:56.766 "bdev_name": "Malloc1" 00:07:56.766 } 00:07:56.766 ]' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:56.766 /dev/nbd1' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:56.766 /dev/nbd1' 00:07:56.766 
12:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:56.766 256+0 records in 00:07:56.766 256+0 records out 00:07:56.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127224 s, 82.4 MB/s 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.766 12:51:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:57.028 256+0 records in 00:07:57.028 256+0 records out 00:07:57.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124861 s, 84.0 MB/s 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:57.028 256+0 records in 00:07:57.028 256+0 records out 00:07:57.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132102 s, 79.4 MB/s 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.028 12:51:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.289 12:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.289 12:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.289 12:51:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.289 12:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.290 12:51:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.290 12:51:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:57.550 12:52:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:57.550 12:52:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:57.811 12:52:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:57.811 [2024-11-29 12:52:00.392128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.811 [2024-11-29 12:52:00.421071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.811 [2024-11-29 12:52:00.421071] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.811 [2024-11-29 12:52:00.449986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:57.811 [2024-11-29 12:52:00.450022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:01.111 12:52:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 692213 /var/tmp/spdk-nbd.sock 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 692213 ']' 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:01.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
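The `nbd_dd_data_verify` entries traced above follow a simple write-then-compare scheme: fill a 1 MiB temp file (`nbdrandtest`) from `/dev/urandom`, `dd` it onto each nbd device, then `cmp -b -n 1M` each device against the pattern file. A self-contained sketch of that scheme is below; plain files stand in for `/dev/nbd0` and `/dev/nbd1`, so the `oflag=direct` seen in the log is dropped (direct I/O needs a real block device):

```shell
#!/usr/bin/env bash
# Sketch of nbd_dd_data_verify: write a random 1 MiB pattern to every
# "device", then byte-compare each one against the pattern file.
set -e
workdir=$(mktemp -d)
pattern=$workdir/nbdrandtest
nbd_list=("$workdir/nbd0" "$workdir/nbd1")   # stand-ins for the nbd devices

# write phase: 256 x 4096 B of random data, then copy onto each device
dd if=/dev/urandom of="$pattern" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: cmp exits non-zero on the first mismatching byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev"
done
echo "verify OK"
rm -rf "$workdir"
```

Because the same pattern file is both source and reference, any corruption introduced by the transport shows up as a `cmp` failure, which aborts the test via `set -e`.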
00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:01.111 12:52:03 event.app_repeat -- event/event.sh@39 -- # killprocess 692213 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 692213 ']' 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 692213 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 692213 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692213' 00:08:01.111 killing process with pid 692213 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 692213 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 692213 00:08:01.111 spdk_app_start is called in Round 0. 00:08:01.111 Shutdown signal received, stop current app iteration 00:08:01.111 Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 reinitialization... 00:08:01.111 spdk_app_start is called in Round 1. 00:08:01.111 Shutdown signal received, stop current app iteration 00:08:01.111 Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 reinitialization... 00:08:01.111 spdk_app_start is called in Round 2. 
00:08:01.111 Shutdown signal received, stop current app iteration 00:08:01.111 Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 reinitialization... 00:08:01.111 spdk_app_start is called in Round 3. 00:08:01.111 Shutdown signal received, stop current app iteration 00:08:01.111 12:52:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:01.111 12:52:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:01.111 00:08:01.111 real 0m15.790s 00:08:01.111 user 0m34.675s 00:08:01.111 sys 0m2.312s 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.111 12:52:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 ************************************ 00:08:01.111 END TEST app_repeat 00:08:01.111 ************************************ 00:08:01.111 12:52:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:01.111 12:52:03 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:01.111 12:52:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.111 12:52:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.111 12:52:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 ************************************ 00:08:01.111 START TEST cpu_locks 00:08:01.111 ************************************ 00:08:01.111 12:52:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:01.372 * Looking for test storage... 
00:08:01.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:01.372 12:52:03 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.373 12:52:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.373 --rc genhtml_branch_coverage=1 00:08:01.373 --rc genhtml_function_coverage=1 00:08:01.373 --rc genhtml_legend=1 00:08:01.373 --rc geninfo_all_blocks=1 00:08:01.373 --rc geninfo_unexecuted_blocks=1 00:08:01.373 00:08:01.373 ' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.373 --rc genhtml_branch_coverage=1 00:08:01.373 --rc genhtml_function_coverage=1 00:08:01.373 --rc genhtml_legend=1 00:08:01.373 --rc geninfo_all_blocks=1 00:08:01.373 --rc geninfo_unexecuted_blocks=1 
00:08:01.373 00:08:01.373 ' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.373 --rc genhtml_branch_coverage=1 00:08:01.373 --rc genhtml_function_coverage=1 00:08:01.373 --rc genhtml_legend=1 00:08:01.373 --rc geninfo_all_blocks=1 00:08:01.373 --rc geninfo_unexecuted_blocks=1 00:08:01.373 00:08:01.373 ' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.373 --rc genhtml_branch_coverage=1 00:08:01.373 --rc genhtml_function_coverage=1 00:08:01.373 --rc genhtml_legend=1 00:08:01.373 --rc geninfo_all_blocks=1 00:08:01.373 --rc geninfo_unexecuted_blocks=1 00:08:01.373 00:08:01.373 ' 00:08:01.373 12:52:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:01.373 12:52:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:01.373 12:52:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:01.373 12:52:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.373 12:52:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.373 ************************************ 00:08:01.373 START TEST default_locks 00:08:01.373 ************************************ 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=695799 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 695799 00:08:01.373 12:52:03 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 695799 ']' 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.373 12:52:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.373 [2024-11-29 12:52:04.026312] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:01.373 [2024-11-29 12:52:04.026361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695799 ] 00:08:01.634 [2024-11-29 12:52:04.079068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.634 [2024-11-29 12:52:04.111336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.634 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.634 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:01.634 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 695799 00:08:01.634 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 695799 00:08:01.634 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.204 lslocks: write error 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 695799 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 695799 ']' 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 695799 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695799 00:08:02.204 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.205 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.205 12:52:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 695799' 00:08:02.205 killing process with pid 695799 00:08:02.205 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 695799 00:08:02.205 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 695799 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 695799 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 695799 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 695799 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 695799 ']' 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (695799) - No such process 00:08:02.466 ERROR: process (pid: 695799) is no longer running 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:02.466 00:08:02.466 real 0m0.932s 00:08:02.466 user 0m0.960s 00:08:02.466 sys 0m0.455s 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.466 12:52:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.466 ************************************ 00:08:02.466 END TEST default_locks 00:08:02.466 ************************************ 00:08:02.466 12:52:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:02.466 12:52:04 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.466 12:52:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.466 12:52:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.466 ************************************ 00:08:02.466 START TEST default_locks_via_rpc 00:08:02.466 ************************************ 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=695869 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 695869 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 695869 ']' 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.466 12:52:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.466 [2024-11-29 12:52:05.038431] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:02.466 [2024-11-29 12:52:05.038491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695869 ] 00:08:02.466 [2024-11-29 12:52:05.124882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.726 [2024-11-29 12:52:05.159823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.297 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.297 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.297 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:03.297 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.298 12:52:05 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 695869 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 695869 00:08:03.298 12:52:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 695869 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 695869 ']' 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 695869 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695869 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695869' 00:08:03.869 killing process with pid 695869 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 695869 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 695869 00:08:03.869 00:08:03.869 real 0m1.517s 00:08:03.869 user 0m1.635s 00:08:03.869 sys 0m0.527s 00:08:03.869 12:52:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.869 12:52:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.869 ************************************ 00:08:03.869 END TEST default_locks_via_rpc 00:08:03.869 ************************************ 00:08:03.869 12:52:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:03.869 12:52:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.869 12:52:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.869 12:52:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.129 ************************************ 00:08:04.129 START TEST non_locking_app_on_locked_coremask 00:08:04.129 ************************************ 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=696208 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 696208 /var/tmp/spdk.sock 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 696208 ']' 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:04.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.129 12:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.129 [2024-11-29 12:52:06.627710] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:04.129 [2024-11-29 12:52:06.627766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696208 ] 00:08:04.129 [2024-11-29 12:52:06.712311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.129 [2024-11-29 12:52:06.745048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=696534 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 696534 /var/tmp/spdk2.sock 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 696534 ']' 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.068 12:52:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.068 [2024-11-29 12:52:07.451251] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:05.068 [2024-11-29 12:52:07.451303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696534 ] 00:08:05.068 [2024-11-29 12:52:07.540350] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:05.068 [2024-11-29 12:52:07.540374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.068 [2024-11-29 12:52:07.598889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.639 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.639 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:05.639 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 696208 00:08:05.639 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 696208 00:08:05.639 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.208 lslocks: write error 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 696208 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 696208 ']' 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 696208 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.208 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696208 00:08:06.468 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.468 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.468 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 696208' 00:08:06.468 killing process with pid 696208 00:08:06.468 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 696208 00:08:06.468 12:52:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 696208 00:08:06.728 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 696534 00:08:06.728 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 696534 ']' 00:08:06.728 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 696534 00:08:06.728 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:06.728 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696534 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696534' 00:08:06.729 killing process with pid 696534 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 696534 00:08:06.729 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 696534 00:08:06.989 00:08:06.989 real 0m2.934s 00:08:06.989 user 0m3.274s 00:08:06.989 sys 0m0.863s 00:08:06.989 12:52:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.989 12:52:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.989 ************************************ 00:08:06.989 END TEST non_locking_app_on_locked_coremask 00:08:06.989 ************************************ 00:08:06.989 12:52:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:06.989 12:52:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.989 12:52:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.989 12:52:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:06.989 ************************************ 00:08:06.989 START TEST locking_app_on_unlocked_coremask 00:08:06.989 ************************************ 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=696909 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 696909 /var/tmp/spdk.sock 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 696909 ']' 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.989 12:52:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.989 12:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.989 [2024-11-29 12:52:09.635871] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:06.989 [2024-11-29 12:52:09.635922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696909 ] 00:08:07.249 [2024-11-29 12:52:09.718620] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:07.249 [2024-11-29 12:52:09.718646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.249 [2024-11-29 12:52:09.749043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=697157 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 697157 /var/tmp/spdk2.sock 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 697157 ']' 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:07.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.819 12:52:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.819 [2024-11-29 12:52:10.492370] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:07.819 [2024-11-29 12:52:10.492428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697157 ] 00:08:08.080 [2024-11-29 12:52:10.582179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.080 [2024-11-29 12:52:10.644181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.650 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.650 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:08.650 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 697157 00:08:08.650 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 697157 00:08:08.650 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.910 lslocks: write error 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 696909 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 696909 ']' 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 696909 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.911 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696909 00:08:09.171 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.171 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.171 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696909' 00:08:09.171 killing process with pid 696909 00:08:09.171 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 696909 00:08:09.171 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 696909 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 697157 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 697157 ']' 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 697157 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.431 12:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697157 00:08:09.431 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.431 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.431 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697157' 00:08:09.431 killing process with pid 697157 00:08:09.431 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 697157 00:08:09.431 12:52:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 697157 00:08:09.692 00:08:09.692 real 0m2.659s 00:08:09.692 user 0m2.962s 00:08:09.692 sys 0m0.810s 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.692 ************************************ 00:08:09.692 END TEST locking_app_on_unlocked_coremask 00:08:09.692 ************************************ 00:08:09.692 12:52:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:09.692 12:52:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.692 12:52:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.692 12:52:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.692 ************************************ 00:08:09.692 START TEST locking_app_on_locked_coremask 00:08:09.692 ************************************ 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=697616 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 697616 /var/tmp/spdk.sock 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 697616 ']' 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.692 12:52:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.952 [2024-11-29 12:52:12.373069] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:09.952 [2024-11-29 12:52:12.373125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697616 ] 00:08:09.952 [2024-11-29 12:52:12.459145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.952 [2024-11-29 12:52:12.492179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=697632 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 697632 /var/tmp/spdk2.sock 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 697632 /var/tmp/spdk2.sock 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 697632 /var/tmp/spdk2.sock 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 697632 ']' 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.522 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.782 [2024-11-29 12:52:13.220934] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:10.782 [2024-11-29 12:52:13.220989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697632 ] 00:08:10.782 [2024-11-29 12:52:13.308981] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 697616 has claimed it. 00:08:10.782 [2024-11-29 12:52:13.309011] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (697632) - No such process 00:08:11.353 ERROR: process (pid: 697632) is no longer running 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 697616 00:08:11.353 12:52:13 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 697616 00:08:11.353 12:52:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.615 lslocks: write error 00:08:11.615 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 697616 00:08:11.615 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 697616 ']' 00:08:11.615 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 697616 00:08:11.615 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697616 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697616' 00:08:11.875 killing process with pid 697616 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 697616 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 697616 00:08:11.875 00:08:11.875 real 0m2.225s 00:08:11.875 user 0m2.545s 00:08:11.875 sys 0m0.608s 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.875 12:52:14 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.875 ************************************ 00:08:11.875 END TEST locking_app_on_locked_coremask 00:08:11.875 ************************************ 00:08:12.136 12:52:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:12.136 12:52:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.136 12:52:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.136 12:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.136 ************************************ 00:08:12.136 START TEST locking_overlapped_coremask 00:08:12.136 ************************************ 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=697995 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 697995 /var/tmp/spdk.sock 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 697995 ']' 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.136 12:52:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.136 [2024-11-29 12:52:14.678212] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:12.136 [2024-11-29 12:52:14.678264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697995 ] 00:08:12.136 [2024-11-29 12:52:14.761764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.136 [2024-11-29 12:52:14.796122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.136 [2024-11-29 12:52:14.796274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.136 [2024-11-29 12:52:14.796378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=698227 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 698227 /var/tmp/spdk2.sock 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 698227 /var/tmp/spdk2.sock 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 698227 /var/tmp/spdk2.sock 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 698227 ']' 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.079 12:52:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.079 [2024-11-29 12:52:15.524989] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:13.079 [2024-11-29 12:52:15.525044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698227 ] 00:08:13.079 [2024-11-29 12:52:15.637696] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 697995 has claimed it. 00:08:13.079 [2024-11-29 12:52:15.637737] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:13.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (698227) - No such process 00:08:13.651 ERROR: process (pid: 698227) is no longer running 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 697995 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 697995 ']' 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 697995 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697995 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697995' 00:08:13.651 killing process with pid 697995 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 697995 00:08:13.651 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 697995 00:08:13.913 00:08:13.913 real 0m1.785s 00:08:13.913 user 0m5.153s 00:08:13.913 sys 0m0.393s 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 ************************************ 
00:08:13.913 END TEST locking_overlapped_coremask 00:08:13.913 ************************************ 00:08:13.913 12:52:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:13.913 12:52:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.913 12:52:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.913 12:52:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 ************************************ 00:08:13.913 START TEST locking_overlapped_coremask_via_rpc 00:08:13.913 ************************************ 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=698372 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 698372 /var/tmp/spdk.sock 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 698372 ']' 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.913 12:52:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 [2024-11-29 12:52:16.536296] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:13.913 [2024-11-29 12:52:16.536352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698372 ] 00:08:14.180 [2024-11-29 12:52:16.619693] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:14.180 [2024-11-29 12:52:16.619722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.180 [2024-11-29 12:52:16.659087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.180 [2024-11-29 12:52:16.659241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.180 [2024-11-29 12:52:16.659378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=698704 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 698704 /var/tmp/spdk2.sock 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 698704 ']' 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.852 12:52:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.852 [2024-11-29 12:52:17.386453] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:14.852 [2024-11-29 12:52:17.386510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698704 ] 00:08:14.852 [2024-11-29 12:52:17.496354] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:14.852 [2024-11-29 12:52:17.496385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.113 [2024-11-29 12:52:17.570198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.113 [2024-11-29 12:52:17.577239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.113 [2024-11-29 12:52:17.577240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.686 12:52:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.686 [2024-11-29 12:52:18.194235] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 698372 has claimed it. 00:08:15.686 request: 00:08:15.686 { 00:08:15.686 "method": "framework_enable_cpumask_locks", 00:08:15.686 "req_id": 1 00:08:15.686 } 00:08:15.686 Got JSON-RPC error response 00:08:15.686 response: 00:08:15.686 { 00:08:15.686 "code": -32603, 00:08:15.686 "message": "Failed to claim CPU core: 2" 00:08:15.686 } 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 698372 /var/tmp/spdk.sock 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 698372 ']' 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.686 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.947 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.947 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 698704 /var/tmp/spdk2.sock 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 698704 ']' 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:15.948 00:08:15.948 real 0m2.093s 00:08:15.948 user 0m0.870s 00:08:15.948 sys 0m0.152s 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.948 12:52:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.948 ************************************ 00:08:15.948 END TEST locking_overlapped_coremask_via_rpc 00:08:15.948 ************************************ 00:08:15.948 12:52:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:15.948 12:52:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 698372 ]] 00:08:15.948 12:52:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 698372 00:08:15.948 12:52:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 698372 ']' 00:08:15.948 12:52:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 698372 00:08:15.948 12:52:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.948 12:52:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.948 12:52:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698372 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698372' 00:08:16.209 killing process with pid 698372 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 698372 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 698372 00:08:16.209 12:52:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 698704 ]] 00:08:16.209 12:52:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 698704 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 698704 ']' 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 698704 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.209 12:52:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698704 00:08:16.469 12:52:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:16.469 12:52:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:16.469 12:52:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698704' 00:08:16.469 
killing process with pid 698704 00:08:16.469 12:52:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 698704 00:08:16.469 12:52:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 698704 00:08:16.469 12:52:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.469 12:52:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:16.469 12:52:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 698372 ]] 00:08:16.469 12:52:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 698372 00:08:16.469 12:52:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 698372 ']' 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 698372 00:08:16.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (698372) - No such process 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 698372 is not found' 00:08:16.470 Process with pid 698372 is not found 00:08:16.470 12:52:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 698704 ]] 00:08:16.470 12:52:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 698704 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 698704 ']' 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 698704 00:08:16.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (698704) - No such process 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 698704 is not found' 00:08:16.470 Process with pid 698704 is not found 00:08:16.470 12:52:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.470 00:08:16.470 real 0m15.404s 00:08:16.470 user 0m27.461s 00:08:16.470 sys 0m4.753s 00:08:16.470 12:52:19 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.470 12:52:19 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.470 ************************************ 00:08:16.470 END TEST cpu_locks 00:08:16.470 ************************************ 00:08:16.732 00:08:16.732 real 0m41.232s 00:08:16.732 user 1m21.575s 00:08:16.732 sys 0m8.178s 00:08:16.732 12:52:19 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.732 12:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.732 ************************************ 00:08:16.732 END TEST event 00:08:16.732 ************************************ 00:08:16.732 12:52:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:16.732 12:52:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.732 12:52:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.732 12:52:19 -- common/autotest_common.sh@10 -- # set +x 00:08:16.732 ************************************ 00:08:16.732 START TEST thread 00:08:16.732 ************************************ 00:08:16.732 12:52:19 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:16.732 * Looking for test storage... 
00:08:16.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:16.732 12:52:19 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.732 12:52:19 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.732 12:52:19 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.994 12:52:19 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.994 12:52:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.994 12:52:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.994 12:52:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.994 12:52:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.994 12:52:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.995 12:52:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.995 12:52:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.995 12:52:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.995 12:52:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.995 12:52:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.995 12:52:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.995 12:52:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:16.995 12:52:19 thread -- scripts/common.sh@345 -- # : 1 00:08:16.995 12:52:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.995 12:52:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.995 12:52:19 thread -- scripts/common.sh@365 -- # decimal 1 00:08:16.995 12:52:19 thread -- scripts/common.sh@353 -- # local d=1 00:08:16.995 12:52:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.995 12:52:19 thread -- scripts/common.sh@355 -- # echo 1 00:08:16.995 12:52:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.995 12:52:19 thread -- scripts/common.sh@366 -- # decimal 2 00:08:16.995 12:52:19 thread -- scripts/common.sh@353 -- # local d=2 00:08:16.995 12:52:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.995 12:52:19 thread -- scripts/common.sh@355 -- # echo 2 00:08:16.995 12:52:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.995 12:52:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.995 12:52:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.995 12:52:19 thread -- scripts/common.sh@368 -- # return 0 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.995 --rc genhtml_branch_coverage=1 00:08:16.995 --rc genhtml_function_coverage=1 00:08:16.995 --rc genhtml_legend=1 00:08:16.995 --rc geninfo_all_blocks=1 00:08:16.995 --rc geninfo_unexecuted_blocks=1 00:08:16.995 00:08:16.995 ' 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.995 --rc genhtml_branch_coverage=1 00:08:16.995 --rc genhtml_function_coverage=1 00:08:16.995 --rc genhtml_legend=1 00:08:16.995 --rc geninfo_all_blocks=1 00:08:16.995 --rc geninfo_unexecuted_blocks=1 00:08:16.995 00:08:16.995 ' 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.995 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.995 --rc genhtml_branch_coverage=1 00:08:16.995 --rc genhtml_function_coverage=1 00:08:16.995 --rc genhtml_legend=1 00:08:16.995 --rc geninfo_all_blocks=1 00:08:16.995 --rc geninfo_unexecuted_blocks=1 00:08:16.995 00:08:16.995 ' 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.995 --rc genhtml_branch_coverage=1 00:08:16.995 --rc genhtml_function_coverage=1 00:08:16.995 --rc genhtml_legend=1 00:08:16.995 --rc geninfo_all_blocks=1 00:08:16.995 --rc geninfo_unexecuted_blocks=1 00:08:16.995 00:08:16.995 ' 00:08:16.995 12:52:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.995 12:52:19 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.995 ************************************ 00:08:16.995 START TEST thread_poller_perf 00:08:16.995 ************************************ 00:08:16.995 12:52:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.995 [2024-11-29 12:52:19.512053] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:16.995 [2024-11-29 12:52:19.512182] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699156 ] 00:08:16.995 [2024-11-29 12:52:19.601436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.995 [2024-11-29 12:52:19.632351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.995 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:18.380 [2024-11-29T11:52:21.060Z] ====================================== 00:08:18.380 [2024-11-29T11:52:21.060Z] busy:2405615006 (cyc) 00:08:18.380 [2024-11-29T11:52:21.060Z] total_run_count: 417000 00:08:18.380 [2024-11-29T11:52:21.060Z] tsc_hz: 2400000000 (cyc) 00:08:18.380 [2024-11-29T11:52:21.060Z] ====================================== 00:08:18.380 [2024-11-29T11:52:21.060Z] poller_cost: 5768 (cyc), 2403 (nsec) 00:08:18.380 00:08:18.380 real 0m1.176s 00:08:18.380 user 0m1.092s 00:08:18.380 sys 0m0.080s 00:08:18.380 12:52:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.380 12:52:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.380 ************************************ 00:08:18.380 END TEST thread_poller_perf 00:08:18.380 ************************************ 00:08:18.380 12:52:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:18.380 12:52:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:18.380 12:52:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.380 12:52:20 thread -- common/autotest_common.sh@10 -- # set +x 00:08:18.380 ************************************ 00:08:18.380 START TEST thread_poller_perf 00:08:18.380 
************************************ 00:08:18.381 12:52:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:18.381 [2024-11-29 12:52:20.766648] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:18.381 [2024-11-29 12:52:20.766757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699502 ] 00:08:18.381 [2024-11-29 12:52:20.863114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.381 [2024-11-29 12:52:20.895351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.381 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:19.322 [2024-11-29T11:52:22.002Z] ====================================== 00:08:19.322 [2024-11-29T11:52:22.002Z] busy:2401531422 (cyc) 00:08:19.322 [2024-11-29T11:52:22.002Z] total_run_count: 5392000 00:08:19.322 [2024-11-29T11:52:22.002Z] tsc_hz: 2400000000 (cyc) 00:08:19.322 [2024-11-29T11:52:22.002Z] ====================================== 00:08:19.322 [2024-11-29T11:52:22.002Z] poller_cost: 445 (cyc), 185 (nsec) 00:08:19.322 00:08:19.322 real 0m1.177s 00:08:19.322 user 0m1.093s 00:08:19.322 sys 0m0.081s 00:08:19.323 12:52:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.323 12:52:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.323 ************************************ 00:08:19.323 END TEST thread_poller_perf 00:08:19.323 ************************************ 00:08:19.323 12:52:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:19.323 00:08:19.323 real 0m2.713s 00:08:19.323 user 0m2.365s 00:08:19.323 sys 0m0.361s 00:08:19.323 12:52:21 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.323 12:52:21 thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.323 ************************************ 00:08:19.323 END TEST thread 00:08:19.323 ************************************ 00:08:19.323 12:52:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:19.323 12:52:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:19.323 12:52:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.323 12:52:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.323 12:52:22 -- common/autotest_common.sh@10 -- # set +x 00:08:19.584 ************************************ 00:08:19.584 START TEST app_cmdline 00:08:19.584 ************************************ 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:19.584 * Looking for test storage... 00:08:19.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.584 12:52:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.584 --rc genhtml_branch_coverage=1 
00:08:19.584 --rc genhtml_function_coverage=1 00:08:19.584 --rc genhtml_legend=1 00:08:19.584 --rc geninfo_all_blocks=1 00:08:19.584 --rc geninfo_unexecuted_blocks=1 00:08:19.584 00:08:19.584 ' 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.584 --rc genhtml_branch_coverage=1 00:08:19.584 --rc genhtml_function_coverage=1 00:08:19.584 --rc genhtml_legend=1 00:08:19.584 --rc geninfo_all_blocks=1 00:08:19.584 --rc geninfo_unexecuted_blocks=1 00:08:19.584 00:08:19.584 ' 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.584 --rc genhtml_branch_coverage=1 00:08:19.584 --rc genhtml_function_coverage=1 00:08:19.584 --rc genhtml_legend=1 00:08:19.584 --rc geninfo_all_blocks=1 00:08:19.584 --rc geninfo_unexecuted_blocks=1 00:08:19.584 00:08:19.584 ' 00:08:19.584 12:52:22 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.584 --rc genhtml_branch_coverage=1 00:08:19.585 --rc genhtml_function_coverage=1 00:08:19.585 --rc genhtml_legend=1 00:08:19.585 --rc geninfo_all_blocks=1 00:08:19.585 --rc geninfo_unexecuted_blocks=1 00:08:19.585 00:08:19.585 ' 00:08:19.585 12:52:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:19.585 12:52:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=699797 00:08:19.585 12:52:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 699797 00:08:19.585 12:52:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 699797 ']' 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.585 12:52:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.845 [2024-11-29 12:52:22.300951] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:19.845 [2024-11-29 12:52:22.301022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699797 ] 00:08:19.845 [2024-11-29 12:52:22.389555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.845 [2024-11-29 12:52:22.429629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.784 12:52:23 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.784 12:52:23 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:20.784 { 00:08:20.784 "version": "SPDK v25.01-pre git sha1 da516d862", 00:08:20.784 "fields": { 00:08:20.784 "major": 25, 00:08:20.784 "minor": 1, 00:08:20.784 "patch": 0, 00:08:20.784 "suffix": "-pre", 00:08:20.784 "commit": "da516d862" 00:08:20.784 } 00:08:20.784 } 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:20.784 12:52:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:20.784 12:52:23 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.785 12:52:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:20.785 12:52:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:20.785 12:52:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:20.785 12:52:23 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.044 request: 00:08:21.044 { 00:08:21.044 "method": "env_dpdk_get_mem_stats", 00:08:21.044 "req_id": 1 00:08:21.044 } 00:08:21.044 Got JSON-RPC error response 00:08:21.044 response: 00:08:21.044 { 00:08:21.044 "code": -32601, 00:08:21.044 "message": "Method not found" 00:08:21.044 } 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.044 12:52:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 699797 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 699797 ']' 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 699797 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:21.044 12:52:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.045 12:52:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 699797 00:08:21.045 12:52:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.045 12:52:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.045 12:52:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 699797' 00:08:21.045 killing process with pid 699797 00:08:21.045 12:52:23 
app_cmdline -- common/autotest_common.sh@973 -- # kill 699797 00:08:21.045 12:52:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 699797 00:08:21.305 00:08:21.305 real 0m1.739s 00:08:21.305 user 0m2.086s 00:08:21.305 sys 0m0.480s 00:08:21.305 12:52:23 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.305 12:52:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.305 ************************************ 00:08:21.305 END TEST app_cmdline 00:08:21.305 ************************************ 00:08:21.305 12:52:23 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:21.305 12:52:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.305 12:52:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.305 12:52:23 -- common/autotest_common.sh@10 -- # set +x 00:08:21.305 ************************************ 00:08:21.305 START TEST version 00:08:21.305 ************************************ 00:08:21.305 12:52:23 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:21.305 * Looking for test storage... 
00:08:21.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:21.305 12:52:23 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.305 12:52:23 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.305 12:52:23 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.566 12:52:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.566 12:52:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.566 12:52:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.566 12:52:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.566 12:52:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.566 12:52:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.566 12:52:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.566 12:52:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.566 12:52:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.566 12:52:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.566 12:52:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.566 12:52:24 version -- scripts/common.sh@344 -- # case "$op" in 00:08:21.566 12:52:24 version -- scripts/common.sh@345 -- # : 1 00:08:21.566 12:52:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.566 12:52:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.566 12:52:24 version -- scripts/common.sh@365 -- # decimal 1 00:08:21.566 12:52:24 version -- scripts/common.sh@353 -- # local d=1 00:08:21.566 12:52:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.566 12:52:24 version -- scripts/common.sh@355 -- # echo 1 00:08:21.566 12:52:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.566 12:52:24 version -- scripts/common.sh@366 -- # decimal 2 00:08:21.566 12:52:24 version -- scripts/common.sh@353 -- # local d=2 00:08:21.566 12:52:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.566 12:52:24 version -- scripts/common.sh@355 -- # echo 2 00:08:21.566 12:52:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.566 12:52:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.566 12:52:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.566 12:52:24 version -- scripts/common.sh@368 -- # return 0 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.566 --rc genhtml_branch_coverage=1 00:08:21.566 --rc genhtml_function_coverage=1 00:08:21.566 --rc genhtml_legend=1 00:08:21.566 --rc geninfo_all_blocks=1 00:08:21.566 --rc geninfo_unexecuted_blocks=1 00:08:21.566 00:08:21.566 ' 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.566 --rc genhtml_branch_coverage=1 00:08:21.566 --rc genhtml_function_coverage=1 00:08:21.566 --rc genhtml_legend=1 00:08:21.566 --rc geninfo_all_blocks=1 00:08:21.566 --rc geninfo_unexecuted_blocks=1 00:08:21.566 00:08:21.566 ' 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.566 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.566 --rc genhtml_branch_coverage=1 00:08:21.566 --rc genhtml_function_coverage=1 00:08:21.566 --rc genhtml_legend=1 00:08:21.566 --rc geninfo_all_blocks=1 00:08:21.566 --rc geninfo_unexecuted_blocks=1 00:08:21.566 00:08:21.566 ' 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.566 --rc genhtml_branch_coverage=1 00:08:21.566 --rc genhtml_function_coverage=1 00:08:21.566 --rc genhtml_legend=1 00:08:21.566 --rc geninfo_all_blocks=1 00:08:21.566 --rc geninfo_unexecuted_blocks=1 00:08:21.566 00:08:21.566 ' 00:08:21.566 12:52:24 version -- app/version.sh@17 -- # get_header_version major 00:08:21.566 12:52:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # cut -f2 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.566 12:52:24 version -- app/version.sh@17 -- # major=25 00:08:21.566 12:52:24 version -- app/version.sh@18 -- # get_header_version minor 00:08:21.566 12:52:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # cut -f2 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.566 12:52:24 version -- app/version.sh@18 -- # minor=1 00:08:21.566 12:52:24 version -- app/version.sh@19 -- # get_header_version patch 00:08:21.566 12:52:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # cut -f2 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.566 
12:52:24 version -- app/version.sh@19 -- # patch=0 00:08:21.566 12:52:24 version -- app/version.sh@20 -- # get_header_version suffix 00:08:21.566 12:52:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # cut -f2 00:08:21.566 12:52:24 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.566 12:52:24 version -- app/version.sh@20 -- # suffix=-pre 00:08:21.566 12:52:24 version -- app/version.sh@22 -- # version=25.1 00:08:21.566 12:52:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:21.566 12:52:24 version -- app/version.sh@28 -- # version=25.1rc0 00:08:21.566 12:52:24 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:21.566 12:52:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.566 12:52:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:21.566 12:52:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:21.566 00:08:21.566 real 0m0.287s 00:08:21.566 user 0m0.164s 00:08:21.566 sys 0m0.172s 00:08:21.566 12:52:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.566 12:52:24 version -- common/autotest_common.sh@10 -- # set +x 00:08:21.566 ************************************ 00:08:21.566 END TEST version 00:08:21.566 ************************************ 00:08:21.566 12:52:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:21.566 12:52:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:21.566 12:52:24 -- spdk/autotest.sh@194 -- # uname -s 00:08:21.566 12:52:24 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:21.566 12:52:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.566 12:52:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.566 12:52:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:21.566 12:52:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:21.566 12:52:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:21.566 12:52:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.566 12:52:24 -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 12:52:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:21.567 12:52:24 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:21.567 12:52:24 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:21.567 12:52:24 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:21.567 12:52:24 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:21.567 12:52:24 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:21.567 12:52:24 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:21.567 12:52:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.567 12:52:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.567 12:52:24 -- common/autotest_common.sh@10 -- # set +x 00:08:21.827 ************************************ 00:08:21.827 START TEST nvmf_tcp 00:08:21.827 ************************************ 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:21.827 * Looking for test storage... 
00:08:21.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.827 12:52:24 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.827 --rc genhtml_branch_coverage=1 00:08:21.827 --rc genhtml_function_coverage=1 00:08:21.827 --rc genhtml_legend=1 00:08:21.827 --rc geninfo_all_blocks=1 00:08:21.827 --rc geninfo_unexecuted_blocks=1 00:08:21.827 00:08:21.827 ' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.827 --rc genhtml_branch_coverage=1 00:08:21.827 --rc genhtml_function_coverage=1 00:08:21.827 --rc genhtml_legend=1 00:08:21.827 --rc geninfo_all_blocks=1 00:08:21.827 --rc geninfo_unexecuted_blocks=1 00:08:21.827 00:08:21.827 ' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:21.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.827 --rc genhtml_branch_coverage=1 00:08:21.827 --rc genhtml_function_coverage=1 00:08:21.827 --rc genhtml_legend=1 00:08:21.827 --rc geninfo_all_blocks=1 00:08:21.827 --rc geninfo_unexecuted_blocks=1 00:08:21.827 00:08:21.827 ' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.827 --rc genhtml_branch_coverage=1 00:08:21.827 --rc genhtml_function_coverage=1 00:08:21.827 --rc genhtml_legend=1 00:08:21.827 --rc geninfo_all_blocks=1 00:08:21.827 --rc geninfo_unexecuted_blocks=1 00:08:21.827 00:08:21.827 ' 00:08:21.827 12:52:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:21.827 12:52:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:21.827 12:52:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.827 12:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.089 ************************************ 00:08:22.089 START TEST nvmf_target_core 00:08:22.089 ************************************ 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:22.089 * Looking for test storage... 
00:08:22.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.089 --rc genhtml_branch_coverage=1 00:08:22.089 --rc genhtml_function_coverage=1 00:08:22.089 --rc genhtml_legend=1 00:08:22.089 --rc geninfo_all_blocks=1 00:08:22.089 --rc geninfo_unexecuted_blocks=1 00:08:22.089 00:08:22.089 ' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.089 --rc genhtml_branch_coverage=1 
00:08:22.089 --rc genhtml_function_coverage=1 00:08:22.089 --rc genhtml_legend=1 00:08:22.089 --rc geninfo_all_blocks=1 00:08:22.089 --rc geninfo_unexecuted_blocks=1 00:08:22.089 00:08:22.089 ' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.089 --rc genhtml_branch_coverage=1 00:08:22.089 --rc genhtml_function_coverage=1 00:08:22.089 --rc genhtml_legend=1 00:08:22.089 --rc geninfo_all_blocks=1 00:08:22.089 --rc geninfo_unexecuted_blocks=1 00:08:22.089 00:08:22.089 ' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.089 --rc genhtml_branch_coverage=1 00:08:22.089 --rc genhtml_function_coverage=1 00:08:22.089 --rc genhtml_legend=1 00:08:22.089 --rc geninfo_all_blocks=1 00:08:22.089 --rc geninfo_unexecuted_blocks=1 00:08:22.089 00:08:22.089 ' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.089 12:52:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.090 12:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.350 ************************************ 00:08:22.350 START TEST nvmf_abort 00:08:22.350 ************************************ 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:22.350 * Looking for test storage... 
00:08:22.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.350 
12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:22.350 12:52:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.350 --rc genhtml_branch_coverage=1 00:08:22.350 --rc genhtml_function_coverage=1 00:08:22.350 --rc genhtml_legend=1 00:08:22.350 --rc geninfo_all_blocks=1 00:08:22.350 --rc 
geninfo_unexecuted_blocks=1 00:08:22.350 00:08:22.350 ' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.350 --rc genhtml_branch_coverage=1 00:08:22.350 --rc genhtml_function_coverage=1 00:08:22.350 --rc genhtml_legend=1 00:08:22.350 --rc geninfo_all_blocks=1 00:08:22.350 --rc geninfo_unexecuted_blocks=1 00:08:22.350 00:08:22.350 ' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.350 --rc genhtml_branch_coverage=1 00:08:22.350 --rc genhtml_function_coverage=1 00:08:22.350 --rc genhtml_legend=1 00:08:22.350 --rc geninfo_all_blocks=1 00:08:22.350 --rc geninfo_unexecuted_blocks=1 00:08:22.350 00:08:22.350 ' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.350 --rc genhtml_branch_coverage=1 00:08:22.350 --rc genhtml_function_coverage=1 00:08:22.350 --rc genhtml_legend=1 00:08:22.350 --rc geninfo_all_blocks=1 00:08:22.350 --rc geninfo_unexecuted_blocks=1 00:08:22.350 00:08:22.350 ' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.350 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.611 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.611 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.611 12:52:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.611 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.612 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.759 12:52:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:30.759 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:30.759 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.759 12:52:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.759 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:30.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:08:30.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:08:30.760 00:08:30.760 --- 10.0.0.2 ping statistics --- 00:08:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.760 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:08:30.760 00:08:30.760 --- 10.0.0.1 ping statistics --- 00:08:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.760 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=704250 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 704250 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 704250 ']' 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.760 12:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.760 [2024-11-29 12:52:32.621848] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:08:30.760 [2024-11-29 12:52:32.621914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.760 [2024-11-29 12:52:32.726718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.760 [2024-11-29 12:52:32.780801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.760 [2024-11-29 12:52:32.780856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.760 [2024-11-29 12:52:32.780865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.760 [2024-11-29 12:52:32.780872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.760 [2024-11-29 12:52:32.780878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:30.760 [2024-11-29 12:52:32.782765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.760 [2024-11-29 12:52:32.782927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.760 [2024-11-29 12:52:32.782929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.022 [2024-11-29 12:52:33.501678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.022 Malloc0 00:08:31.022 12:52:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.022 Delay0 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.022 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.023 [2024-11-29 12:52:33.584530] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.023 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:31.023 [2024-11-29 12:52:33.693035] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:33.568 Initializing NVMe Controllers 00:08:33.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.568 controller IO queue size 128 less than required 00:08:33.568 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:33.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:33.568 Initialization complete. Launching workers. 
00:08:33.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27955 00:08:33.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28016, failed to submit 62 00:08:33.568 success 27959, unsuccessful 57, failed 0 00:08:33.568 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.568 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.569 rmmod nvme_tcp 00:08:33.569 rmmod nvme_fabrics 00:08:33.569 rmmod nvme_keyring 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:33.569 12:52:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 704250 ']' 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 704250 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 704250 ']' 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 704250 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704250 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704250' 00:08:33.569 killing process with pid 704250 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 704250 00:08:33.569 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 704250 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.569 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.484 00:08:35.484 real 0m13.291s 00:08:35.484 user 0m13.567s 00:08:35.484 sys 0m6.671s 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.484 ************************************ 00:08:35.484 END TEST nvmf_abort 00:08:35.484 ************************************ 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.484 12:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.746 ************************************ 00:08:35.746 START TEST nvmf_ns_hotplug_stress 00:08:35.746 ************************************ 00:08:35.746 12:52:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:35.746 * Looking for test storage... 00:08:35.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.746 
12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.746 12:52:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.746 --rc genhtml_branch_coverage=1 00:08:35.746 --rc genhtml_function_coverage=1 00:08:35.746 --rc genhtml_legend=1 00:08:35.746 --rc geninfo_all_blocks=1 00:08:35.746 --rc geninfo_unexecuted_blocks=1 00:08:35.746 00:08:35.746 ' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.746 --rc genhtml_branch_coverage=1 00:08:35.746 --rc genhtml_function_coverage=1 00:08:35.746 --rc genhtml_legend=1 00:08:35.746 --rc geninfo_all_blocks=1 00:08:35.746 --rc geninfo_unexecuted_blocks=1 00:08:35.746 00:08:35.746 ' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.746 --rc genhtml_branch_coverage=1 00:08:35.746 --rc genhtml_function_coverage=1 00:08:35.746 --rc genhtml_legend=1 00:08:35.746 --rc geninfo_all_blocks=1 00:08:35.746 --rc geninfo_unexecuted_blocks=1 00:08:35.746 00:08:35.746 ' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.746 --rc genhtml_branch_coverage=1 00:08:35.746 --rc genhtml_function_coverage=1 00:08:35.746 --rc genhtml_legend=1 00:08:35.746 --rc geninfo_all_blocks=1 00:08:35.746 --rc geninfo_unexecuted_blocks=1 00:08:35.746 
00:08:35.746 ' 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:35.746 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.747 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.901 12:52:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:43.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:43.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.901 12:52:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:43.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.901 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.901 12:52:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:43.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.902 12:52:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:08:43.902 00:08:43.902 --- 10.0.0.2 ping statistics --- 00:08:43.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.902 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:43.902 00:08:43.902 --- 10.0.0.1 ping statistics --- 00:08:43.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.902 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=709130 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 709130 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 709130 ']' 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.902 12:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.902 [2024-11-29 12:52:45.995563] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:08:43.902 [2024-11-29 12:52:45.995630] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.902 [2024-11-29 12:52:46.098710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.902 [2024-11-29 12:52:46.150591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.902 [2024-11-29 12:52:46.150645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.902 [2024-11-29 12:52:46.150653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.902 [2024-11-29 12:52:46.150661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.902 [2024-11-29 12:52:46.150667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
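The EAL parameters above show the target pinned with core mask `-c 0xE`, which is why three reactors come up on cores 1, 2, and 3. Decoding such a mask bit by bit makes the mapping explicit; a minimal sketch:

```shell
# Decode an SPDK/DPDK core mask (-c 0xE / -m 0xE) into CPU ids:
# each set bit selects one CPU.
mask=0xE
m=$((mask))
cores=""
cpu=0
while [ "$m" -gt 0 ]; do
    if [ $((m & 1)) -eq 1 ]; then
        cores="${cores:+$cores }$cpu"    # append this CPU id
    fi
    m=$((m >> 1))
    cpu=$((cpu + 1))
done
echo "core mask $mask -> CPUs: $cores"   # prints: core mask 0xE -> CPUs: 1 2 3
```

0xE is binary 1110, so CPU 0 is excluded and CPUs 1-3 each get a reactor, matching the three "Reactor started" notices in this trace.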
00:08:43.902 [2024-11-29 12:52:46.152692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.902 [2024-11-29 12:52:46.152847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.902 [2024-11-29 12:52:46.152848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.164 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.165 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:44.165 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.165 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.165 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.426 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.426 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:44.426 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.426 [2024-11-29 12:52:47.039137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.426 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.688 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.948 [2024-11-29 12:52:47.426309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.948 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.210 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:45.210 Malloc0 00:08:45.210 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.472 Delay0 00:08:45.472 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.734 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:45.996 NULL1 00:08:45.996 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:45.996 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=709820 00:08:45.996 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:45.996 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:45.996 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.257 Read completed with error (sct=0, sc=11) 00:08:46.257 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.517 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:46.517 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:46.517 true 00:08:46.517 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:46.517 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.462 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
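The trace then settles into the ns_hotplug_stress iteration visible above: hot-remove namespace 1, re-attach `Delay0`, bump `null_size`, and resize `NULL1` while `spdk_nvme_perf` keeps issuing reads. A dry-run sketch of that loop, where `rpc` just echoes instead of invoking `scripts/rpc.py`:

```shell
# Dry-run of the hotplug stress iteration: each pass hot-removes nsid 1,
# re-attaches Delay0, and grows the null bdev by one block.
rpc() { echo "rpc.py $*"; }       # stand-in; a real run calls scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000
iter=0
while [ "$iter" -lt 3 ]; do       # the real test loops until perf exits
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    iter=$((iter + 1))
done
echo "null_size grew to $null_size"
```

This mirrors the `null_size=1001`, `1002`, `1003`, ... progression in the trace: the point of the test is that in-flight perf I/O survives each remove/add/resize cycle, with any reads caught mid-hotplug completing with an error status rather than hanging.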
00:08:47.723 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:47.724 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:47.724 true 00:08:47.724 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:47.724 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.985 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.246 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:48.246 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:48.246 true 00:08:48.246 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:48.246 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.509 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.509 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:48.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.771 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:48.771 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:48.771 true 00:08:49.063 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:49.063 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.633 12:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.892 12:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:49.892 12:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:50.153 true 00:08:50.153 12:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:50.153 12:52:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.413 12:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.413 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:50.413 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:50.672 true 00:08:50.672 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:50.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.944 [2024-11-29 12:52:53.544554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
SGL length 1 00:08:50.944 [the same ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR*: "Read NLB 1 * block size 512 > SGL length 1" line repeats many times between 12:52:53.544620 and 12:52:53.550510; duplicates omitted] 00:08:50.944 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.945 [2024-11-29 12:52:53.550541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.945 [2024-11-29 12:52:53.550674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.945 [2024-11-29 12:52:53.550704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.945 [2024-11-29 12:52:53.550731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.550974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 
[2024-11-29 12:52:53.551067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551954] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.551986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552878] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.552997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 
12:52:53.553839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.553979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 
[2024-11-29 12:52:53.554762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.554983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.555011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.555043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.555071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.946 [2024-11-29 12:52:53.555106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555578] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.555979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556487] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.556974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 12:52:53.557351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.947 [2024-11-29 
12:52:53.557379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29
12:52:53.568611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.568970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 
[2024-11-29 12:52:53.569498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.569974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570064] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.570988] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.949 [2024-11-29 12:52:53.571364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.571934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 
12:52:53.571967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.572974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 
[2024-11-29 12:52:53.573304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573712] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.573963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574749] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.574988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 12:52:53.575671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.950 [2024-11-29 
12:52:53.575703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:50.950 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:08:50.950 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:08:50.953 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:08:50.953 [2024-11-29 12:52:53.585866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.585896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.585925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.585953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.585991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586832] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.586972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.953 [2024-11-29 12:52:53.587694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 
12:52:53.587761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.587974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 
[2024-11-29 12:52:53.588705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.588998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589312] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.589996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590463] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.954 [2024-11-29 12:52:53.590977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 
12:52:53.591478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.591984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 
[2024-11-29 12:52:53.592402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [2024-11-29 12:52:53.592809] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.955 [... identical error message repeated continuously from 12:52:53.592872 through 12:52:53.604422 ...] 00:08:50.958 [2024-11-29 12:52:53.604453] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.604992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.958 [2024-11-29 12:52:53.605298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605359] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.605980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 
12:52:53.606385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.606996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 
[2024-11-29 12:52:53.607586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.607998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608028] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.959 [2024-11-29 12:52:53.608567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.608985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609187] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.609979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 
12:52:53.610064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.610918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 
[2024-11-29 12:52:53.611428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:50.960 [2024-11-29 12:52:53.611713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.248 [2024-11-29 12:52:53.611744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.248 [2024-11-29 12:52:53.611776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.248 [2024-11-29 12:52:53.611804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.248 [2024-11-29 12:52:53.611831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.248 [2024-11-29 12:52:53.611862] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated with successive timestamps from 12:52:53.611891 onward; repeats elided ...]
00:08:51.251 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical error line repeated with successive timestamps through 12:52:53.621996; repeats elided ...]
00:08:51.252 [2024-11-29 12:52:53.622022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622488] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.622989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 
12:52:53.623387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.623984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 
[2024-11-29 12:52:53.624630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.252 [2024-11-29 12:52:53.624684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.624971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625086] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.625976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626008] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.626972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 
12:52:53.627060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.253 [2024-11-29 12:52:53.627304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 
[2024-11-29 12:52:53.627955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.627985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628389] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.628476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.254 [2024-11-29 12:52:53.629363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.254 [... same ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim, timestamps 12:52:53.629412 through 12:52:53.640003 ...] 00:08:51.257 [2024-11-29 12:52:53.640040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640892] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.640982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.257 [2024-11-29 12:52:53.641354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 
12:52:53.641811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.641969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 
[2024-11-29 12:52:53.642837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.642991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643754] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.643973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.258 [2024-11-29 12:52:53.644717] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.644977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 
12:52:53.645590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.645984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 
[2024-11-29 12:52:53.646610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.646992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.259 [2024-11-29 12:52:53.647027] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR* lines repeated, timestamps 2024-11-29 12:52:53.647067 through 12:52:53.657661, omitted] 00:08:51.262 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.262 [further identical *ERROR* repeats, timestamps 12:52:53.658099 through 12:52:53.658798, omitted] 00:08:51.262 [2024-11-29 12:52:53.658830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.262 [2024-11-29 12:52:53.658874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.658904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.658937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.658976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659317] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.262 [2024-11-29 12:52:53.659420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.659987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 
12:52:53.660388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.660974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 
[2024-11-29 12:52:53.661284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661699] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.661974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.263 [2024-11-29 12:52:53.662844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.662873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.662900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.662931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.662966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.662997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663184] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.663978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 
12:52:53.664070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.664978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 
[2024-11-29 12:52:53.665069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665491] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.264 [2024-11-29 12:52:53.665522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated continuously from 12:52:53.665551 through 12:52:53.676767, log timestamps 00:08:51.264-00:08:51.268] 00:08:51.268 [2024-11-29 12:52:53.676798] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.676989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677699] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.677971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 
12:52:53.678593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.678990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 
[2024-11-29 12:52:53.679625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.268 [2024-11-29 12:52:53.679807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.679839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.679870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.679924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.679956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.679985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680074] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.680974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681713] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.681992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 
12:52:53.682600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.682895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 
[2024-11-29 12:52:53.683646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.269 [2024-11-29 12:52:53.683729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.683972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.684001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.684038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.684070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.270 [2024-11-29 12:52:53.684099] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-29 12:52:53.694764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.273 [2024-11-29 12:52:53.694821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.694976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:51.273 [2024-11-29 12:52:53.695668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.695981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696114] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.696991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697083] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.273 [2024-11-29 12:52:53.697873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.697903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.697932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.697965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 
12:52:53.698115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.698993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 
[2024-11-29 12:52:53.699023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699466] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.699777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700683] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.700968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 12:52:53.701533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274 [2024-11-29 
12:52:53.701563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.274
[identical *ERROR* line repeated for successive timestamps 2024-11-29 12:52:53.701592 through 12:52:53.713057]
00:08:51.277 [2024-11-29
12:52:53.713087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.713973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 
[2024-11-29 12:52:53.714035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.277 [2024-11-29 12:52:53.714597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714779] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.714993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715672] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.715975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 
12:52:53.716706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.716980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.717959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 
[2024-11-29 12:52:53.718029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718469] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.278 [2024-11-29 12:52:53.718750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.718783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.718816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.718855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.718888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.718918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719552] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.719984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.279 [2024-11-29 12:52:53.720013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated verbatim for timestamps 12:52:53.720040 through 12:52:53.731167; duplicates omitted]
00:08:51.282 [2024-11-29 12:52:53.731202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 
12:52:53.731683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.731798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 Message suppressed 999 times: [2024-11-29 12:52:53.732297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 Read completed with error (sct=0, sc=15) 00:08:51.282 [2024-11-29 12:52:53.732329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:51.282 [2024-11-29 12:52:53.732390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732815] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.732990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733724] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.733969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.734984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 
12:52:53.735011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.735045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.735075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.735101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.282 [2024-11-29 12:52:53.735134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 
[2024-11-29 12:52:53.735940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.735975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736404] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.736466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737915] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.737979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 true 00:08:51.283 [2024-11-29 12:52:53.738321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.283 [2024-11-29 12:52:53.738379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.749969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 
[2024-11-29 12:52:53.750274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750711] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.750971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.286 [2024-11-29 12:52:53.751690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751751] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.751980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 
12:52:53.752652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.752975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 
[2024-11-29 12:52:53.753911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.753975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754361] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.754975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755260] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.287 [2024-11-29 12:52:53.755291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.755975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 
12:52:53.756520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.288 [2024-11-29 12:52:53.756547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:51.290 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820
00:08:51.290 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:51.291 [2024-11-29 12:52:53.767445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.291 [2024-11-29 12:52:53.767641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767891] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.767986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768842] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.768987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.291 [2024-11-29 12:52:53.769961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.769989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 
12:52:53.770107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.770969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 
[2024-11-29 12:52:53.770998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771428] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.771989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772647] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.772986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.292 [2024-11-29 12:52:53.773168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 
12:52:53.773595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.773915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.774273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.774300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.774330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.774360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.293 [2024-11-29 12:52:53.785219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 
[2024-11-29 12:52:53.785658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.785977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786124] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.786983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787602] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.296 [2024-11-29 12:52:53.787666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.787985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 
12:52:53.788560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.788990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 
[2024-11-29 12:52:53.789812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.789972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790257] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.790989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.791019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.791049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.791080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.791115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.297 [2024-11-29 12:52:53.791151] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.791972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 12:52:53.792221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29 
12:52:53.792249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.298 [2024-11-29
12:52:53.804033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 
[2024-11-29 12:52:53.804925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.804988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805341] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.301 [2024-11-29 12:52:53.805475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.805980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.302 [2024-11-29 12:52:53.806211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 
12:52:53.806851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.806999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 
[2024-11-29 12:52:53.807757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.807982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808784] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.808970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.302 [2024-11-29 12:52:53.809219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809656] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.809984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 12:52:53.810518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.303 [2024-11-29 
12:52:53.810548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29
12:52:53.821878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.821909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.821945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.821974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.822985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 
[2024-11-29 12:52:53.823156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.306 [2024-11-29 12:52:53.823346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823597] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.823987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824518] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.824842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 
12:52:53.825796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.825960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 
[2024-11-29 12:52:53.826726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.307 [2024-11-29 12:52:53.826784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.826979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827165] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.827992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828396] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.308 [2024-11-29 12:52:53.828867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated from 12:52:53.828897 through 12:52:53.840356 (log timestamps 00:08:51.308-00:08:51.311); repeats omitted ...] 00:08:51.311 [2024-11-29 12:52:53.840384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 
12:52:53.840849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.840978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.311 [2024-11-29 12:52:53.841167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 
[2024-11-29 12:52:53.841878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.841967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.312 [2024-11-29 12:52:53.842056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:51.312 [2024-11-29 12:52:53.842292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842742] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.842964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.843583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844182] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.312 [2024-11-29 12:52:53.844883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.844914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.844943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.844979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 
12:52:53.845097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.845994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 
[2024-11-29 12:52:53.846024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846537] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.313 [2024-11-29 12:52:53.846998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.313 [2024-11-29 12:52:53.847031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.857963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.316 [2024-11-29 12:52:53.858723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858752] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.858965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 
12:52:53.859672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.859972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 
[2024-11-29 12:52:53.860730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.860989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861171] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.317 [2024-11-29 12:52:53.861997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.317 [2024-11-29 12:52:53.862314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862434] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.862945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 
12:52:53.863379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.863994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 
[2024-11-29 12:52:53.864518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864939] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.318 [2024-11-29 12:52:53.864969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.321 [2024-11-29 12:52:53.876369] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.321 [2024-11-29 12:52:53.876403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.321 [2024-11-29 12:52:53.876436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.876986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877295] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.877998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 
12:52:53.878415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.878995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.322 [2024-11-29 12:52:53.879148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:51.322 [2024-11-29 12:52:53.879301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879748] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.879992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.322 [2024-11-29 12:52:53.880174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880638] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.880992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 
12:52:53.881670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.881999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 
[2024-11-29 12:52:53.882906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.882969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883351] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.323 [2024-11-29 12:52:53.883380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same error repeated verbatim through 2024-11-29 12:52:53.894482 ...]
00:08:51.327 [2024-11-29 12:52:53.894516] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.894952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895556] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.895980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 
12:52:53.896877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.896970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 
[2024-11-29 12:52:53.897785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.897998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898227] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.327 [2024-11-29 12:52:53.898291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.898984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899243] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.328 [2024-11-29 12:52:53.899581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 
12:52:53.900713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.605 [2024-11-29 12:52:53.900872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.900902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.900934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.900964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.900994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 
[2024-11-29 12:52:53.901631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.901975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.902003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.902031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.902071] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.606 [2024-11-29 12:52:53.902102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log truncated: the same ctrlr_bdev.c:384 nvmf_bdev_ctrlr_read_cmd error repeated for every entry timestamped 2024-11-29 12:52:53.902137 through 12:52:53.912790]
00:08:51.609 
[2024-11-29 12:52:53.912819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.912849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.912881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.912912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.912942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.912975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913346] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.913910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914913] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.914976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.609 [2024-11-29 12:52:53.915164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 
12:52:53.915856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.915974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 
[2024-11-29 12:52:53.916905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.610 [2024-11-29 12:52:53.916936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.916995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:51.610 [2024-11-29 12:52:53.917363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917817] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.610 [2024-11-29 12:52:53.917995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918787] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:53.918823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.611 [2024-11-29 12:52:54.108840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.108886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.108917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.108947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.108978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109068] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.611 [2024-11-29 12:52:54.109559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.611 [... same *ERROR* line repeated for every read from 12:52:54.109587 through 12:52:54.118479; identical output elided ...] 00:08:51.614 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.614 [2024-11-29 12:52:54.118987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [... same *ERROR* line repeated from 12:52:54.119019 through 12:52:54.120136; identical output elided ...] 00:08:51.614 [2024-11-29 12:52:54.120173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120633] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.120948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.614 [2024-11-29 12:52:54.121567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 
12:52:54.121662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.121986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.122494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 
[2024-11-29 12:52:54.123290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123771] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.123993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124664] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.124972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.615 [2024-11-29 12:52:54.125272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 
12:52:54.125647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.125856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 
[2024-11-29 12:52:54.126791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.126995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127217] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.616 [2024-11-29 12:52:54.127836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 12:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:51.620 [2024-11-29 12:52:54.138848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.138966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139003] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 12:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:51.620 [2024-11-29 12:52:54.139246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139403] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.139437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140970] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.140999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.620 [2024-11-29 12:52:54.141372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 
12:52:54.141815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.141971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 
[2024-11-29 12:52:54.142836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.142990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143290] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.143996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144904] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.144991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.145018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.145050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.621 [2024-11-29 12:52:54.145080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 12:52:54.145805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.622 [2024-11-29 
12:52:54.145836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.624 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.625 [2024-11-29 12:52:54.156806] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.156839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.156873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.156903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.157983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158259] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.625 [2024-11-29 12:52:54.158740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.158989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 
12:52:54.159188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.159933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 
[2024-11-29 12:52:54.160175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160600] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.160966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161483] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.626 [2024-11-29 12:52:54.161879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.161910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.161937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.161965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 
12:52:54.162747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.162996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 
[2024-11-29 12:52:54.163689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.163991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.164020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.164049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.164079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.164107] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.627 [2024-11-29 12:52:54.164140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 
[2024-11-29 12:52:54.175006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175440] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.630 [2024-11-29 12:52:54.175978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176409] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.176739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 
12:52:54.177647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.177972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 
[2024-11-29 12:52:54.178516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.178954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179019] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.631 [2024-11-29 12:52:54.179921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.179950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.179981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180462] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.180985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 
12:52:54.181393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 [2024-11-29 12:52:54.181959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.632 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.635
[2024-11-29 12:52:54.193057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193502] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.635 [2024-11-29 12:52:54.193531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.193850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194934] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.194999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 
12:52:54.195851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.195981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 
[2024-11-29 12:52:54.196899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.196988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.636 [2024-11-29 12:52:54.197531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197595] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.197980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198529] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.198797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 
12:52:54.199682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.199996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.200026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.200058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.200093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.200122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.637 [2024-11-29 12:52:54.200154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.640 [last message repeated through 2024-11-29 12:52:54.211551; duplicate log lines omitted]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.640 [2024-11-29 12:52:54.211583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.640 [2024-11-29 12:52:54.211610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.211962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 
[2024-11-29 12:52:54.211995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212445] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.212983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213361] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.213996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 
12:52:54.214624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.641 [2024-11-29 12:52:54.214971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 
[2024-11-29 12:52:54.215530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.215739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216320] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.216997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217233] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.642 [2024-11-29 12:52:54.217893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.217923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.217953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.217981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.218032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.218061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.218113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 12:52:54.218143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [2024-11-29 
12:52:54.218500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.643 [... same *ERROR* message repeated, timestamps 2024-11-29 12:52:54.218531 through 12:52:54.229491; duplicate entries elided ...] 00:08:51.646 [2024-11-29
12:52:54.229524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.229999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 
[2024-11-29 12:52:54.230413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.646 [2024-11-29 12:52:54.230800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.230984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:51.646 [2024-11-29 12:52:54.231174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231602] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.231976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232491] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.646 [2024-11-29 12:52:54.232528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.232733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.233986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 
12:52:54.234046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.234932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 
[2024-11-29 12:52:54.234969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235581] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.647 [2024-11-29 12:52:54.235793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.235979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.648 [2024-11-29 12:52:54.236522] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247852] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.247983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.651 [2024-11-29 12:52:54.248672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 
12:52:54.248761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.248983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.249348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 
[2024-11-29 12:52:54.250325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250782] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.250995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.251026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.251054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.652 [2024-11-29 12:52:54.251091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251672] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.251991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 
12:52:54.252693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.252969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.653 [2024-11-29 12:52:54.253467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 
[2024-11-29 12:52:54.253659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.253975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254095] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.654 [2024-11-29 12:52:54.254928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
00:08:51.654 [2024-11-29 12:52:54.254958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:51.654 [... identical *ERROR* line repeated verbatim for every timestamp from 12:52:54.254989 through 12:52:54.266001; repeats elided ...]
00:08:51.658 [2024-11-29 12:52:54.266034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266510] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.658 [2024-11-29 12:52:54.266683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.659 [2024-11-29 12:52:54.267248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 
12:52:54.267510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.659 [2024-11-29 12:52:54.267857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.267921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.267955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.267986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.268014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.268044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.948 [2024-11-29 12:52:54.268075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 
[2024-11-29 12:52:54.268458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268894] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.268983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269956] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.269985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.949 [2024-11-29 12:52:54.270568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 
12:52:54.270862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.270980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.271968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 
[2024-11-29 12:52:54.272081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272568] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.272965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.950 [2024-11-29 12:52:54.273002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.950 [2024-11-29 12:52:54.273027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284579] (previous message repeated; identical occurrences through this timestamp omitted)
> SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.284979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285067] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.285982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 
12:52:54.286014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.954 [2024-11-29 12:52:54.286996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 
[2024-11-29 12:52:54.287149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287585] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.287990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288496] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.288982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 
12:52:54.289792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.289969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.955 [2024-11-29 12:52:54.290441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 
[2024-11-29 12:52:54.290678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.290992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291511] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.956 [2024-11-29 12:52:54.291539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: [last message repeated verbatim through 2024-11-29 12:52:54.301873]
> SGL length 1 00:08:51.959 [2024-11-29 12:52:54.301903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.301936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.301962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.301989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302334] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.302981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 
12:52:54.303593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.303983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.959 [2024-11-29 12:52:54.304321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 
[2024-11-29 12:52:54.304478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.304932] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.960 [2024-11-29 12:52:54.305292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 
12:52:54.305707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.305998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 
[2024-11-29 12:52:54.306597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.306974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307037] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.307988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.308020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.960 [2024-11-29 12:52:54.308059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308273] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.308986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 12:52:54.309164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 [2024-11-29 
12:52:54.309198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.961 (previous *ERROR* message repeated verbatim with timestamps 12:52:54.309259 through 12:52:54.320396; duplicate occurrences elided) 00:08:51.964 [2024-11-29
12:52:54.320430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.964 [2024-11-29 12:52:54.320759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.320983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 
[2024-11-29 12:52:54.321682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.321990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322117] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.322982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323041] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.323982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 
12:52:54.324036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.965 [2024-11-29 12:52:54.324207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.324968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 
[2024-11-29 12:52:54.325425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 12:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:51.966 [2024-11-29 12:52:54.325832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.325991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 12:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.966 [2024-11-29 12:52:54.326226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326257] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.326998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327253] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.966 [2024-11-29 12:52:54.327284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical *ERROR* lines from 12:52:54.327313 through 12:52:54.338127 omitted] 00:08:51.970 [2024-11-29 12:52:54.338162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338656] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.338997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 
12:52:54.339653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.339990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.970 [2024-11-29 12:52:54.340773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340893] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.340977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.970 [2024-11-29 12:52:54.341626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341780] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.341991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 
12:52:54.342782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.342969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.343986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 
[2024-11-29 12:52:54.344101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344515] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.344980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.971 [2024-11-29 12:52:54.345011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.345041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.971 [2024-11-29 12:52:54.345071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345459] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.972 [2024-11-29 12:52:54.345488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for every timestamp from 12:52:54.345518 through 12:52:54.356171 ...]
> SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356662] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.356984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 
12:52:54.357913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.357976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.975 [2024-11-29 12:52:54.358826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 
[2024-11-29 12:52:54.358855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.358891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.358920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.358949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.358976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359278] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.359991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360486] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.360996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 
12:52:54.361437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.361930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.362306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.362344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.362370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.976 [2024-11-29 12:52:54.362397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 
[2024-11-29 12:52:54.362681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.362988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363103] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.363994] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.364984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 
12:52:54.365235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.977 [2024-11-29 12:52:54.365394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.365981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 
[2024-11-29 12:52:54.366145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366906] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.366999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367821] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.367981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 12:52:54.368743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.978 [2024-11-29 
12:52:54.368773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.368803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.368832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.368866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.368898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.368927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.369976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 
[2024-11-29 12:52:54.370185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370631] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.370997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371678] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.371998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.979 [2024-11-29 12:52:54.372766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 
12:52:54.372886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.372978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [2024-11-29 12:52:54.373318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.980 [... identical "ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated; duplicates trimmed ...] Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:51.982 [... further identical error entries trimmed ...] [2024-11-29
12:52:54.383129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.383992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 
[2024-11-29 12:52:54.384395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384841] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.384984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385785] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.385938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.386620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.983 [2024-11-29 12:52:54.386653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.386996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 
12:52:54.387323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.387994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 
[2024-11-29 12:52:54.388266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388841] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.388980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.984 [2024-11-29 12:52:54.389759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389788] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.389977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 12:52:54.390681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [2024-11-29 
12:52:54.390717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.985 [... identical *ERROR* line repeated from 2024-11-29 12:52:54.390743 through 12:52:54.402197 ...] 00:08:51.988 [2024-11-29 
12:52:54.402227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.402990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 
[2024-11-29 12:52:54.403318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403738] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.988 [2024-11-29 12:52:54.403881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.403908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.403950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.403978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404729] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.404964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.405987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 
12:52:54.406268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.406990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 
[2024-11-29 12:52:54.407211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.989 [2024-11-29 12:52:54.407549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407668] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.407986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408729] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.408980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.990 [2024-11-29 12:52:54.409193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated, timestamps 12:52:54.409252 through 12:52:54.414642]
00:08:51.992 Message suppressed 999 times: [2024-11-29 12:52:54.415267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:51.992 Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated, timestamps 12:52:54.415306 through 12:52:54.420085]
00:08:51.993 [2024-11-29 12:52:54.420108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420554] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.420983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.993 [2024-11-29 12:52:54.421234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 
12:52:54.421513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.421699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 
[2024-11-29 12:52:54.422728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.422985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423164] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.994 [2024-11-29 12:52:54.423921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.423955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.423981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424478] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.424970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 
12:52:54.425382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.425997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 
[2024-11-29 12:52:54.426273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.426987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427053] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.995 [2024-11-29 12:52:54.427232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:51.996 [2024-11-29 12:52:54.427512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.998 [... identical *ERROR* line repeated through 2024-11-29 12:52:54.438351; duplicate log entries omitted ...]
> SGL length 1 00:08:51.999 [2024-11-29 12:52:54.438404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.999 [2024-11-29 12:52:54.438433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.999 [2024-11-29 12:52:54.438475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.999 [2024-11-29 12:52:54.438504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:51.999 [2024-11-29 12:52:54.438531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.438558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.438905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.438945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.438983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439172] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.439983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 
12:52:54.440043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.440851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 
[2024-11-29 12:52:54.441330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.000 [2024-11-29 12:52:54.441483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441782] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.441975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442731] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.442968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.443971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 
12:52:54.444238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.001 [2024-11-29 12:52:54.444645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.444996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 
[2024-11-29 12:52:54.445179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445619] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.445894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.002 [2024-11-29 12:52:54.446044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:52.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.942 12:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.942 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:52.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.204 12:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:53.204 12:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:53.204 true 00:08:53.204 12:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:53.204 12:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.145 12:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.405 12:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:54.406 12:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:54.406 true 00:08:54.406 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:54.406 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.666 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.032 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:55.032 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:55.032 true 00:08:55.032 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:55.032 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.343 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.343 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:55.343 12:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:55.605 true 00:08:55.605 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:55.605 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.866 
12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.866 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:55.866 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:56.126 true 00:08:56.126 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:56.126 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.509 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:57.509 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:57.509 true 00:08:57.769 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:57.769 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.711 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.711 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:58.711 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:58.711 true 00:08:58.711 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:58.711 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.971 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.232 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:59.232 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:59.232 true 00:08:59.492 12:53:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:08:59.492 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.433 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.693 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:00.693 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:00.954 true 00:09:00.954 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:00.954 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.894 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.894 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:01.894 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:02.154 true 00:09:02.154 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:02.154 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.154 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.415 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:02.415 12:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:02.675 true 00:09:02.675 12:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:02.675 12:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 12:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.057 12:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:04.057 12:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:04.057 true 00:09:04.057 12:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:04.057 12:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.997 12:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.258 12:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:05.258 12:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:05.258 true 00:09:05.258 
12:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:05.258 12:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.518 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.779 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:05.779 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:05.779 true 00:09:05.779 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:05.779 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.039 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.300 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:06.300 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:06.300 true 00:09:06.561 12:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:06.561 12:53:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.561 12:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.820 12:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:06.820 12:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:07.078 true 00:09:07.078 12:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:07.079 12:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.018 12:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.279 12:53:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:08.279 12:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:08.540 true 00:09:08.540 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:08.540 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.483 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.484 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:09.484 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:09.744 true 00:09:09.744 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:09.744 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.005 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.005 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:10.005 12:53:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:10.265 true 00:09:10.265 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:10.266 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 12:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.470 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:11.470 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:11.731 true 00:09:11.731 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:11.731 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.672 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.672 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:12.672 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:12.932 true 00:09:12.932 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:12.932 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.192 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.192 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:13.192 12:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:13.453 true 00:09:13.453 12:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:13.453 12:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.836 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.836 12:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.837 12:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:14.837 12:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:14.837 true 00:09:14.837 12:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820 00:09:14.837 12:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.776 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.037 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:09:16.037 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:09:16.037 true
00:09:16.298 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820
00:09:16.298 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.298 12:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:16.298 Initializing NVMe Controllers
00:09:16.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:16.298 Controller IO queue size 128, less than required.
00:09:16.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:16.298 Controller IO queue size 128, less than required.
00:09:16.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:16.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:16.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:16.298 Initialization complete. Launching workers.
00:09:16.298 ========================================================
00:09:16.298 Latency(us)
00:09:16.298 Device Information :                     IOPS      MiB/s    Average        min        max
00:09:16.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  3345.11       1.63   24915.36    1431.60 1009671.33
00:09:16.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18826.84       9.19    6799.14    1198.48  405249.94
00:09:16.298 ========================================================
00:09:16.298 Total                                                                    : 22171.95      10.83    9532.36    1198.48 1009671.33
00:09:16.298
00:09:16.560 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:09:16.560 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:09:16.560 true
00:09:16.821 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709820
00:09:16.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (709820) - No such process
00:09:16.821 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 709820
00:09:16.821 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.821 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:17.082 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:17.082 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:17.082 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:17.082 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.082 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:17.344 null0 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:17.344 null1 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.344 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:17.606 null2 00:09:17.606 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.606 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.606 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:17.866 null3 00:09:17.866 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.866 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.866 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:17.866 null4 00:09:17.866 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:17.867 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:17.867 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:18.128 null5 00:09:18.128 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.128 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.128 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:18.389 null6 00:09:18.389 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.389 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.389 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:18.389 null7 00:09:18.389 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:18.389 12:53:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:18.389 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:18.389 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.650 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.650 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:18.650 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.650 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:18.650 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 716313 716314 716316 716318 716320 716322 716324 716325 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.651 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.913 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.175 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.176 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.437 12:53:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.437 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.437 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.699 12:53:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.699 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.960 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.220 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.480 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.480 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.742 12:53:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.742 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.003 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:09:21.264 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.525 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.526 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.526 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.526 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.526 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.526 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.787 12:53:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.787 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.047 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:22.047 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:22.047 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.048 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:22.308 
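The interleaved xtrace above (ns_hotplug_stress.sh lines 16–18) is consistent with several per-namespace workers concurrently looping over `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns`. The sketch below is a reconstruction from the trace, not the script's actual source; `rpc_py` is stubbed with `echo` standing in for SPDK's `scripts/rpc.py` so the sketch runs standalone.

```shell
#!/usr/bin/env bash
# Reconstruction of the hotplug stress loop seen in the xtrace.
# Assumption: rpc_py is a stub; the real test invokes scripts/rpc.py.
rpc_py() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1

# One worker per namespace ID: repeatedly attach a null bdev as that
# namespace, then detach it, matching the sh@17/sh@18 lines in the trace.
hotplug_worker() {
    local nsid=$1 i
    for (( i = 0; i < 10; ++i )); do
        rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))"
        rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# Running eight workers concurrently produces the interleaved add/remove
# ordering visible in the log.
for nsid in {1..8}; do
    hotplug_worker "$nsid" &
done
wait
```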
12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.308 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:22.309 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.309 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:22.309 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.309 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.309 rmmod nvme_tcp 00:09:22.309 rmmod nvme_fabrics 00:09:22.309 rmmod nvme_keyring 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 709130 ']' 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 709130 ']' 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.568 12:53:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709130' 00:09:22.568 killing process with pid 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 709130 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.568 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.119 00:09:25.119 real 0m49.099s 00:09:25.119 user 3m12.464s 00:09:25.119 sys 0m16.822s 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.119 ************************************ 00:09:25.119 END TEST nvmf_ns_hotplug_stress 00:09:25.119 ************************************ 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.119 ************************************ 00:09:25.119 START TEST nvmf_delete_subsystem 00:09:25.119 ************************************ 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:25.119 * Looking for test storage... 
00:09:25.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:25.119 12:53:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.119 --rc genhtml_branch_coverage=1 00:09:25.119 --rc genhtml_function_coverage=1 00:09:25.119 --rc genhtml_legend=1 00:09:25.119 --rc geninfo_all_blocks=1 00:09:25.119 --rc geninfo_unexecuted_blocks=1 00:09:25.119 00:09:25.119 ' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.119 --rc genhtml_branch_coverage=1 00:09:25.119 --rc genhtml_function_coverage=1 00:09:25.119 --rc genhtml_legend=1 00:09:25.119 --rc geninfo_all_blocks=1 00:09:25.119 --rc geninfo_unexecuted_blocks=1 00:09:25.119 00:09:25.119 ' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.119 --rc genhtml_branch_coverage=1 00:09:25.119 --rc genhtml_function_coverage=1 00:09:25.119 --rc genhtml_legend=1 00:09:25.119 --rc geninfo_all_blocks=1 00:09:25.119 --rc geninfo_unexecuted_blocks=1 00:09:25.119 00:09:25.119 ' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.119 --rc genhtml_branch_coverage=1 00:09:25.119 --rc genhtml_function_coverage=1 00:09:25.119 --rc genhtml_legend=1 00:09:25.119 --rc geninfo_all_blocks=1 00:09:25.119 --rc geninfo_unexecuted_blocks=1 00:09:25.119 00:09:25.119 ' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.119 12:53:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.119 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.120 12:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.267 12:53:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:33.267 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:09:33.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:09:33.268 00:09:33.268 --- 10.0.0.2 ping statistics --- 00:09:33.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.268 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:09:33.268 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:09:33.268 00:09:33.268 --- 10.0.0.1 ping statistics --- 00:09:33.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.268 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.268 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:33.269 12:53:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=721492 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 721492 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 721492 ']' 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.269 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.269 [2024-11-29 12:53:35.120915] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:09:33.269 [2024-11-29 12:53:35.120979] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.269 [2024-11-29 12:53:35.219909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.269 [2024-11-29 12:53:35.271571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.269 [2024-11-29 12:53:35.271625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.269 [2024-11-29 12:53:35.271634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.269 [2024-11-29 12:53:35.271640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.269 [2024-11-29 12:53:35.271647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:33.269 [2024-11-29 12:53:35.273269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.269 [2024-11-29 12:53:35.273440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.530 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 [2024-11-29 12:53:36.007183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 [2024-11-29 12:53:36.031516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 NULL1 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 Delay0 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=721844 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:33.531 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:33.531 [2024-11-29 12:53:36.158562] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:35.446 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.446 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.446 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 
00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 starting I/O failed: -6 00:09:35.708 [2024-11-29 12:53:38.324335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d2c0 is same with the state(6) to be set 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with 
error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Write completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.708 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 [2024-11-29 12:53:38.325711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x83d680 is same with the state(6) to be set 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write 
completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 starting I/O failed: -6 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 [2024-11-29 12:53:38.329447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b6400d490 is same with the state(6) to be set 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed 
with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Write completed with error (sct=0, sc=8) 00:09:35.709 Read completed with error (sct=0, sc=8) 00:09:35.709 [2024-11-29 12:53:38.329783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b64000c40 is same with the state(6) to be set 00:09:36.651 [2024-11-29 12:53:39.298654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e9b0 is same with the state(6) to be set 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with 
error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 [2024-11-29 12:53:39.327593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d4a0 is same with the state(6) to be set 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, sc=8) 00:09:36.651 Read completed with error (sct=0, 
sc=8) 00:09:36.651 Write completed with error (sct=0, sc=8) 00:09:36.651 [2024-11-29 12:53:39.328064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d860 is same with the state(6) to be set 00:09:36.911 Write completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Write completed with error (sct=0, sc=8) 00:09:36.911 Write completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Write completed with error (sct=0, sc=8) 00:09:36.911 Write completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 [2024-11-29 12:53:39.331893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b6400d020 is same with the state(6) to be set 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.911 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, 
sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Write completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 Read completed with error (sct=0, sc=8) 00:09:36.912 [2024-11-29 12:53:39.332006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b6400d7c0 is same with the state(6) to be set 00:09:36.912 Initializing NVMe Controllers 00:09:36.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.912 Controller IO queue size 128, less than required. 00:09:36.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:36.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:36.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:36.912 Initialization complete. Launching workers. 
00:09:36.912 ========================================================
00:09:36.912 Latency(us)
00:09:36.912 Device Information : IOPS MiB/s Average min max
00:09:36.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.31 0.08 916745.18 1214.90 1007522.74
00:09:36.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.31 0.08 949159.62 363.88 2002633.55
00:09:36.912 ========================================================
00:09:36.912 Total : 320.62 0.16 932952.40 363.88 2002633.55
00:09:36.912
00:09:36.912 [2024-11-29 12:53:39.332621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83e9b0 (9): Bad file descriptor
00:09:36.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:36.912 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.912 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:36.912 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 721844
00:09:36.912 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:37.172 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:37.172 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 721844
00:09:37.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (721844) - No such process
00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 721844
00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:09:37.173 12:53:39
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 721844 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 721844 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.173 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.434 
12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.434 [2024-11-29 12:53:39.864962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=722526 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:37.434 12:53:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.434 [2024-11-29 12:53:39.970292] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:38.006 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.006 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:38.006 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.267 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.267 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:38.267 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.837 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.837 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:38.837 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.408 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.408 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:39.408 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.981 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.981 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:39.981 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:40.242 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:40.242 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:40.242 12:53:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:40.813 Initializing NVMe Controllers 00:09:40.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:40.813 Controller IO queue size 128, less than required. 00:09:40.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:40.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:40.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:40.813 Initialization complete. Launching workers. 00:09:40.813 ======================================================== 00:09:40.813 Latency(us) 00:09:40.813 Device Information : IOPS MiB/s Average min max 00:09:40.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001836.72 1000218.97 1005076.45 00:09:40.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002855.10 1000404.53 1008025.77 00:09:40.813 ======================================================== 00:09:40.813 Total : 256.00 0.12 1002345.91 1000218.97 1008025.77 00:09:40.813 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 722526 00:09:40.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (722526) - No such process 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 722526 00:09:40.813 12:53:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.813 rmmod nvme_tcp 00:09:40.813 rmmod nvme_fabrics 00:09:40.813 rmmod nvme_keyring 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 721492 ']' 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 721492 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 721492 ']' 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 721492 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.813 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721492 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721492' 00:09:41.073 killing process with pid 721492 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 721492 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 721492 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.073 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.074 12:53:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.074 12:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.615 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.615 00:09:43.615 real 0m18.388s 00:09:43.615 user 0m31.021s 00:09:43.615 sys 0m6.817s 00:09:43.615 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.615 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:43.615 ************************************ 00:09:43.616 END TEST nvmf_delete_subsystem 00:09:43.616 ************************************ 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.616 ************************************ 00:09:43.616 START TEST nvmf_host_management 00:09:43.616 ************************************ 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:43.616 * Looking for test storage... 
00:09:43.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.616 12:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:43.616 12:53:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.616 12:53:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.616 --rc genhtml_branch_coverage=1 00:09:43.616 --rc genhtml_function_coverage=1 00:09:43.616 --rc genhtml_legend=1 00:09:43.616 --rc geninfo_all_blocks=1 00:09:43.616 --rc geninfo_unexecuted_blocks=1 00:09:43.616 00:09:43.616 ' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.616 --rc genhtml_branch_coverage=1 00:09:43.616 --rc genhtml_function_coverage=1 00:09:43.616 --rc genhtml_legend=1 00:09:43.616 --rc geninfo_all_blocks=1 00:09:43.616 --rc geninfo_unexecuted_blocks=1 00:09:43.616 00:09:43.616 ' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.616 --rc genhtml_branch_coverage=1 00:09:43.616 --rc genhtml_function_coverage=1 00:09:43.616 --rc genhtml_legend=1 00:09:43.616 --rc geninfo_all_blocks=1 00:09:43.616 --rc geninfo_unexecuted_blocks=1 00:09:43.616 00:09:43.616 ' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.616 --rc genhtml_branch_coverage=1 00:09:43.616 --rc genhtml_function_coverage=1 00:09:43.616 --rc genhtml_legend=1 00:09:43.616 --rc geninfo_all_blocks=1 00:09:43.616 --rc geninfo_unexecuted_blocks=1 00:09:43.616 00:09:43.616 ' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:43.616 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.617 12:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.806 12:53:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.806 12:53:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:51.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.806 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:51.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.807 12:53:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:51.807 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:51.807 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.807 12:53:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:09:51.807 00:09:51.807 --- 10.0.0.2 ping statistics --- 00:09:51.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.807 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:09:51.807 00:09:51.807 --- 10.0.0.1 ping statistics --- 00:09:51.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.807 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
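The `nvmf_tcp_init` sequence traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target interface into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420 in iptables, then ping both directions) can be condensed into a standalone sketch. Interface names and addresses below come straight from the trace; the `run` wrapper (an addition, not part of `nvmf/common.sh`) only echoes each command so the sequence can be reviewed without root.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above (nvmf/common.sh@250-291).
set -eu

NVMF_TARGET_INTERFACE=cvl_0_0        # moved into the target namespace
NVMF_INITIATOR_INTERFACE=cvl_0_1     # stays in the default namespace
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # swap the echo for "$@" (as root) to apply for real

nvmf_tcp_init() {
    run ip -4 addr flush "$NVMF_TARGET_INTERFACE"
    run ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
    run ip netns add "$NVMF_TARGET_NAMESPACE"
    run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
    run ip addr add "$NVMF_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"
    run ip link set "$NVMF_INITIATOR_INTERFACE" up
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$NVMF_FIRST_TARGET_IP"
    run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
}
```

Putting the target interface in its own network namespace is what forces the subsequent initiator-to-target traffic over the real NIC pair instead of the loopback path, which is why both pings in the log report distinct RTTs.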
00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=727551 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 727551 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 727551 ']' 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
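The `waitforlisten 727551` call above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern is below; note this is a simplified reconstruction — the real helper in `autotest_common.sh` also verifies that the pid is still alive between retries, which is omitted here.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from the trace: poll until the
# app's UNIX-domain RPC socket appears, giving up after max_retries.
set -eu

waitforlisten() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # -S is true once the socket file exists; the real helper goes
        # further and issues an RPC over it to confirm the app answers.
        if [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```

In the trace the same pattern is reused a second time against `/var/tmp/bdevperf.sock` once the bdevperf process is started.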
00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.807 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.807 [2024-11-29 12:53:53.719203] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:09:51.807 [2024-11-29 12:53:53.719268] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.807 [2024-11-29 12:53:53.819880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.807 [2024-11-29 12:53:53.873661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.807 [2024-11-29 12:53:53.873718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.807 [2024-11-29 12:53:53.873727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.807 [2024-11-29 12:53:53.873739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.807 [2024-11-29 12:53:53.873746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.807 [2024-11-29 12:53:53.875810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.807 [2024-11-29 12:53:53.875973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.807 [2024-11-29 12:53:53.876138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.807 [2024-11-29 12:53:53.876138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.126 [2024-11-29 12:53:54.599032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:52.126 12:53:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.126 Malloc0 00:09:52.126 [2024-11-29 12:53:54.675724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=727851 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 727851 /var/tmp/bdevperf.sock 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 727851 ']' 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:52.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.126 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:52.127 { 00:09:52.127 "params": { 00:09:52.127 "name": "Nvme$subsystem", 00:09:52.127 "trtype": "$TEST_TRANSPORT", 00:09:52.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.127 "adrfam": "ipv4", 00:09:52.127 "trsvcid": "$NVMF_PORT", 00:09:52.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.127 "hdgst": ${hdgst:-false}, 
00:09:52.127 "ddgst": ${ddgst:-false} 00:09:52.127 }, 00:09:52.127 "method": "bdev_nvme_attach_controller" 00:09:52.127 } 00:09:52.127 EOF 00:09:52.127 )") 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:52.127 12:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:52.127 "params": { 00:09:52.127 "name": "Nvme0", 00:09:52.127 "trtype": "tcp", 00:09:52.127 "traddr": "10.0.0.2", 00:09:52.127 "adrfam": "ipv4", 00:09:52.127 "trsvcid": "4420", 00:09:52.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:52.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:52.127 "hdgst": false, 00:09:52.127 "ddgst": false 00:09:52.127 }, 00:09:52.127 "method": "bdev_nvme_attach_controller" 00:09:52.127 }' 00:09:52.127 [2024-11-29 12:53:54.786137] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:09:52.127 [2024-11-29 12:53:54.786218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727851 ] 00:09:52.462 [2024-11-29 12:53:54.881686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.462 [2024-11-29 12:53:54.935490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.775 Running I/O for 10 seconds... 
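The `gen_nvmf_target_json` expansion traced above builds one `bdev_nvme_attach_controller` block per subsystem index from a heredoc template, joins them with `IFS=,`, and feeds the result to bdevperf via `--json /dev/fd/63`. The sketch below reconstructs only what the trace actually prints; the target address and port are hard-coded to the values shown in the log (10.0.0.2:4420), whereas the real helper in `nvmf/common.sh` reads them from the environment and runs the result through `jq`.

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json output seen in the trace: one
# attach-controller config object per subsystem index, comma-joined.
set -eu

gen_nvmf_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

Passing the config over `/dev/fd/63` (process substitution) rather than a temp file is what lets the test hand bdevperf a generated JSON document without leaving anything on disk.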
00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:53.037 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=587 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 587 -ge 100 ']' 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 [2024-11-29 12:53:55.691649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is 
same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be 
set 00:09:53.038 [2024-11-29 12:53:55.691891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 
12:53:55.691973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.691993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692054] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692137] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.692216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebe150 is same with the state(6) to be set 00:09:53.038 [2024-11-29 12:53:55.694932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:53.038 [2024-11-29 12:53:55.694989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical print_command/print_completion NOTICE pairs elided: every outstanding I/O on qid:1 is aborted with SQ DELETION (00/08) -- WRITE cid:1-16 (lba:90240-92160) and READ cid:17-63 (lba:84096-89984), len:128 each, timestamps 12:53:55.695011 through 12:53:55.696136 ...]
00:09:53.040 [2024-11-29 12:53:55.696145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fee0 is same with the state(6) to be set 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:53.040 [2024-11-29 12:53:55.697472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:53.040 task offset: 90112 on job bdev=Nvme0n1 fails 00:09:53.040 00:09:53.040 Latency(us) 00:09:53.040 [2024-11-29T11:53:55.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.040 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:53.040 Job: Nvme0n1 ended in about 0.46 seconds with error 00:09:53.040 Verification LBA range: start 0x0 length 0x400 00:09:53.040 Nvme0n1 : 0.46 1415.62 88.48 137.90 0.00 40038.93 1740.80 38229.33 00:09:53.040
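The abort flood above is mechanical: once the submission queue is deleted during the controller reset, every queued command completes with ABORTED - SQ DELETION (00/08). A minimal sketch for tallying those entries out of a capture like this one (the regex assumes this log's exact nvme_qpair.c print format; it is an illustration, not an SPDK API):

```python
import re
from collections import Counter

# Matches SPDK nvme_io_qpair_print_command NOTICE lines as they appear in
# this capture (format assumed from this log; other SPDK versions may differ).
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally_aborted(log_text):
    """Count print_command entries per opcode; in this capture each one is
    paired with an ABORTED - SQ DELETION completion."""
    return Counter(m.group(1) for m in CMD_RE.finditer(log_text))

# Two sample records copied from the log above.
sample = (
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE "
    "sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ "
    "sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
)
print(tally_aborted(sample))
```

Run against the full dump above, the tally would show 17 WRITEs (cid:0-16) and 47 READs (cid:17-63) on qid:1.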
[2024-11-29T11:53:55.720Z] =================================================================================================================== 00:09:53.040 [2024-11-29T11:53:55.720Z] Total : 1415.62 88.48 137.90 0.00 40038.93 1740.80 38229.33 00:09:53.040 [2024-11-29 12:53:55.699732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:53.040 [2024-11-29 12:53:55.699773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887010 (9): Bad file descriptor 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.040 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:53.040 [2024-11-29 12:53:55.713813] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 727851 00:09:54.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (727851) - No such process 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:54.424 12:53:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.424 { 00:09:54.424 "params": { 00:09:54.424 "name": "Nvme$subsystem", 00:09:54.424 "trtype": "$TEST_TRANSPORT", 00:09:54.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.424 "adrfam": "ipv4", 00:09:54.424 "trsvcid": "$NVMF_PORT", 00:09:54.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.424 "hdgst": ${hdgst:-false}, 00:09:54.424 "ddgst": ${ddgst:-false} 00:09:54.424 }, 00:09:54.424 "method": "bdev_nvme_attach_controller" 00:09:54.424 } 00:09:54.424 EOF 00:09:54.424 )") 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:54.424 12:53:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.424 "params": { 00:09:54.424 "name": "Nvme0", 00:09:54.424 "trtype": "tcp", 00:09:54.424 "traddr": "10.0.0.2", 00:09:54.424 "adrfam": "ipv4", 00:09:54.424 "trsvcid": "4420", 00:09:54.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:54.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:54.424 "hdgst": false, 00:09:54.424 "ddgst": false 00:09:54.424 }, 00:09:54.424 "method": "bdev_nvme_attach_controller" 00:09:54.424 }' 00:09:54.424 [2024-11-29 12:53:56.768427] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:09:54.424 [2024-11-29 12:53:56.768483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728283 ] 00:09:54.424 [2024-11-29 12:53:56.857250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.424 [2024-11-29 12:53:56.892798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.684 Running I/O for 1 seconds... 00:09:55.624 1792.00 IOPS, 112.00 MiB/s 00:09:55.624 Latency(us) 00:09:55.624 [2024-11-29T11:53:58.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.624 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:55.624 Verification LBA range: start 0x0 length 0x400 00:09:55.624 Nvme0n1 : 1.02 1820.84 113.80 0.00 0.00 34504.50 5761.71 31894.19 00:09:55.624 [2024-11-29T11:53:58.304Z] =================================================================================================================== 00:09:55.624 [2024-11-29T11:53:58.304Z] Total : 1820.84 113.80 0.00 0.00 34504.50 5761.71 31894.19 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:55.624 12:53:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.624 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.624 rmmod nvme_tcp 00:09:55.624 rmmod nvme_fabrics 00:09:55.885 rmmod nvme_keyring 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 727551 ']' 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 727551 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 727551 ']' 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 727551 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727551 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727551' 00:09:55.885 killing process with pid 727551 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 727551 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 727551 00:09:55.885 [2024-11-29 12:53:58.499084] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:55.885 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.886 12:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:58.435 00:09:58.435 real 0m14.785s 00:09:58.435 user 0m23.352s 00:09:58.435 sys 0m6.828s 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:58.435 ************************************ 00:09:58.435 END TEST nvmf_host_management 00:09:58.435 ************************************ 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.435 ************************************ 00:09:58.435 START TEST nvmf_lvol 00:09:58.435 ************************************ 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:58.435 * Looking for test storage... 
00:09:58.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.435 12:54:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.435 --rc genhtml_branch_coverage=1 00:09:58.435 --rc genhtml_function_coverage=1 00:09:58.435 --rc genhtml_legend=1 00:09:58.435 --rc geninfo_all_blocks=1 00:09:58.435 --rc geninfo_unexecuted_blocks=1 
00:09:58.435 00:09:58.435 ' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.435 --rc genhtml_branch_coverage=1 00:09:58.435 --rc genhtml_function_coverage=1 00:09:58.435 --rc genhtml_legend=1 00:09:58.435 --rc geninfo_all_blocks=1 00:09:58.435 --rc geninfo_unexecuted_blocks=1 00:09:58.435 00:09:58.435 ' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.435 --rc genhtml_branch_coverage=1 00:09:58.435 --rc genhtml_function_coverage=1 00:09:58.435 --rc genhtml_legend=1 00:09:58.435 --rc geninfo_all_blocks=1 00:09:58.435 --rc geninfo_unexecuted_blocks=1 00:09:58.435 00:09:58.435 ' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.435 --rc genhtml_branch_coverage=1 00:09:58.435 --rc genhtml_function_coverage=1 00:09:58.435 --rc genhtml_legend=1 00:09:58.435 --rc geninfo_all_blocks=1 00:09:58.435 --rc geninfo_unexecuted_blocks=1 00:09:58.435 00:09:58.435 ' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.435 12:54:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.435 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
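The `[: : integer expression expected` message above comes from nvmf/common.sh line 33 testing an empty string with `-eq` (`'[' '' -eq 1 ']'`); the script tolerates it because the test simply fails, but the noise is avoidable. A minimal sketch of the defensive pattern (a hypothetical guard, not the upstream fix — `check_flag` is an illustrative name):

```shell
#!/usr/bin/env bash
# An empty or unset variable passed to "[ ... -eq 1 ]" triggers
# "integer expression expected". Defaulting it to 0 first keeps the
# test valid for all inputs.
check_flag() {
  local flag="${1:-0}"        # substitute 0 when empty/unset
  if [ "$flag" -eq 1 ]; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

check_flag ""    # empty input no longer trips the integer test
check_flag 1
```

With the raw form `[ "$flag" -eq 1 ]` and `flag=''`, bash prints the same diagnostic seen in the log before taking the false branch.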
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.436 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.588 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.588 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.588 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.589 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.589 
12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.589 12:54:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:10:06.589 00:10:06.589 --- 10.0.0.2 ping statistics --- 00:10:06.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.589 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:06.589 00:10:06.589 --- 10.0.0.1 ping statistics --- 00:10:06.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.589 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.589 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
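The nvmf_tcp_init sequence above (common.sh@265–291) isolates the target NIC in a private network namespace so the initiator and target can talk over real TCP on one host: `cvl_0_0` moves into `cvl_0_0_ns_spdk` as the target side (10.0.0.2), `cvl_0_1` stays in the default namespace as the initiator side (10.0.0.1), and both directions are verified with `ping`. A dry-run sketch of that wiring — `run` only echoes each command here, since the real calls need root and the actual ice-driver interfaces:

```shell
#!/usr/bin/env bash
# Dry-run of the netns plumbing the log performs for NVMe/TCP testing.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }                     # swap for "sudo" to run for real

run ip netns add "$NS"                     # private namespace for the target
run ip link set cvl_0_0 netns "$NS"        # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator address (default ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                     # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1 # target -> initiator reachability
```

The target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.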
common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=732848 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 732848 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 732848 ']' 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.590 12:54:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 [2024-11-29 12:54:08.425879] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:10:06.590 [2024-11-29 12:54:08.425948] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.590 [2024-11-29 12:54:08.526072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.590 [2024-11-29 12:54:08.578773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.590 [2024-11-29 12:54:08.578824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.590 [2024-11-29 12:54:08.578833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.590 [2024-11-29 12:54:08.578841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.590 [2024-11-29 12:54:08.578847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:06.590 [2024-11-29 12:54:08.580687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.590 [2024-11-29 12:54:08.580847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.590 [2024-11-29 12:54:08.580848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.590 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.590 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:06.590 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.590 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.590 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:06.852 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.852 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.852 [2024-11-29 12:54:09.474679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.852 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.113 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:07.113 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.373 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:07.374 12:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:07.635 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:07.896 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a4e706fe-2f23-489d-b318-7ea0d9e3568a 00:10:07.896 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4e706fe-2f23-489d-b318-7ea0d9e3568a lvol 20 00:10:07.896 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=124225b8-e77b-44c0-a676-f7a05b4a414a 00:10:07.896 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:08.157 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 124225b8-e77b-44c0-a676-f7a05b4a414a 00:10:08.418 12:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:08.678 [2024-11-29 12:54:11.123828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.678 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.678 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=733882 00:10:08.678 12:54:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:08.678 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:10.060 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 124225b8-e77b-44c0-a676-f7a05b4a414a MY_SNAPSHOT 00:10:10.060 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=75eb9083-9516-4387-a1bc-7bee88aee1f1 00:10:10.060 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 124225b8-e77b-44c0-a676-f7a05b4a414a 30 00:10:10.321 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 75eb9083-9516-4387-a1bc-7bee88aee1f1 MY_CLONE 00:10:10.321 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=83d7e369-c9cc-4ff8-ab64-7c9b27033113 00:10:10.321 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 83d7e369-c9cc-4ff8-ab64-7c9b27033113 00:10:10.893 12:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 733882 00:10:19.026 Initializing NVMe Controllers 00:10:19.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:19.026 Controller IO queue size 128, less than required. 00:10:19.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:19.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:19.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:19.026 Initialization complete. Launching workers. 00:10:19.026 ======================================================== 00:10:19.026 Latency(us) 00:10:19.026 Device Information : IOPS MiB/s Average min max 00:10:19.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16186.60 63.23 7907.60 1676.22 63587.90 00:10:19.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17168.30 67.06 7457.12 680.66 41697.20 00:10:19.026 ======================================================== 00:10:19.026 Total : 33354.90 130.29 7675.73 680.66 63587.90 00:10:19.026 00:10:19.287 12:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:19.287 12:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 124225b8-e77b-44c0-a676-f7a05b4a414a 00:10:19.548 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4e706fe-2f23-489d-b318-7ea0d9e3568a 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.809 rmmod nvme_tcp 00:10:19.809 rmmod nvme_fabrics 00:10:19.809 rmmod nvme_keyring 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 732848 ']' 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 732848 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 732848 ']' 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 732848 00:10:19.809 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 732848 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 732848' 00:10:19.810 killing process with pid 732848 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@973 -- # kill 732848 00:10:19.810 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 732848 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.071 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.981 00:10:21.981 real 0m23.916s 00:10:21.981 user 1m4.913s 00:10:21.981 sys 0m8.607s 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 ************************************ 00:10:21.981 END TEST nvmf_lvol 00:10:21.981 
************************************ 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.981 12:54:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.241 ************************************ 00:10:22.241 START TEST nvmf_lvs_grow 00:10:22.241 ************************************ 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:22.241 * Looking for test storage... 00:10:22.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.241 --rc genhtml_branch_coverage=1 00:10:22.241 --rc genhtml_function_coverage=1 00:10:22.241 --rc genhtml_legend=1 00:10:22.241 --rc geninfo_all_blocks=1 00:10:22.241 --rc geninfo_unexecuted_blocks=1 00:10:22.241 00:10:22.241 ' 
00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.241 --rc genhtml_branch_coverage=1 00:10:22.241 --rc genhtml_function_coverage=1 00:10:22.241 --rc genhtml_legend=1 00:10:22.241 --rc geninfo_all_blocks=1 00:10:22.241 --rc geninfo_unexecuted_blocks=1 00:10:22.241 00:10:22.241 ' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.241 --rc genhtml_branch_coverage=1 00:10:22.241 --rc genhtml_function_coverage=1 00:10:22.241 --rc genhtml_legend=1 00:10:22.241 --rc geninfo_all_blocks=1 00:10:22.241 --rc geninfo_unexecuted_blocks=1 00:10:22.241 00:10:22.241 ' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.241 --rc genhtml_branch_coverage=1 00:10:22.241 --rc genhtml_function_coverage=1 00:10:22.241 --rc genhtml_legend=1 00:10:22.241 --rc geninfo_all_blocks=1 00:10:22.241 --rc geninfo_unexecuted_blocks=1 00:10:22.241 00:10:22.241 ' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.241 12:54:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.241 
12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.241 12:54:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:22.241 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.242 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.502 
12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.502 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.502 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.502 12:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.635 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:30.636 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:30.636 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.636 
12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:30.636 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:30.636 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.636 12:54:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:10:30.636 00:10:30.636 --- 10.0.0.2 ping statistics --- 00:10:30.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.636 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:30.636 00:10:30.636 --- 10.0.0.1 ping statistics --- 00:10:30.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.636 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=740419 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 740419 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 740419 ']' 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.636 12:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.636 [2024-11-29 12:54:32.506571] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:10:30.636 [2024-11-29 12:54:32.506636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.636 [2024-11-29 12:54:32.605989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.636 [2024-11-29 12:54:32.657271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.636 [2024-11-29 12:54:32.657327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.636 [2024-11-29 12:54:32.657338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.636 [2024-11-29 12:54:32.657345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.636 [2024-11-29 12:54:32.657351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
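The nvmf_tcp_init sequence earlier in the trace moves one port into a private network namespace so initiator and target traffic crosses a real link. A condensed sketch of those steps, with the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing taken from the trace; this is a configuration fragment only, since it needs root and the actual NIC ports:

```shell
# Sketch of the nvmf_tcp_init steps seen in the trace -- not runnable without
# the real cvl_0_0/cvl_0_1 ports and root privileges.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator
```

The two pings are the same reachability check the trace performs before starting nvmf_tgt inside the namespace.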
00:10:30.636 [2024-11-29 12:54:32.658090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:30.898 [2024-11-29 12:54:33.538854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.898 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 ************************************ 00:10:31.159 START TEST lvs_grow_clean 00:10:31.159 ************************************ 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.159 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:31.419 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:31.419 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:31.419 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:31.420 12:54:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:31.420 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:31.681 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:31.681 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:31.681 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc lvol 150 00:10:31.942 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=201d7854-a32d-48ba-a189-37e78f99d27a 00:10:31.942 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.942 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:31.942 [2024-11-29 12:54:34.539597] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:31.942 [2024-11-29 12:54:34.539676] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:31.942 true 00:10:31.942 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
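The rescan notice above reports the backing file going from 51200 to 102400 blocks after the `truncate -s 400M` step. The file-size half of that grow can be reproduced standalone with a scratch file (the temp file here is hypothetical; only the sizes come from the trace):

```shell
# Reproduce the size change behind the bdev_aio_rescan notice:
# the test enlarges the AIO backing file from 200M to 400M, then rescans.
f=$(mktemp)
truncate -s 200M "$f"
initial=$(stat -c %s "$f")   # 209715200 bytes = 51200 4KiB blocks
truncate -s 400M "$f"
final=$(stat -c %s "$f")     # 419430400 bytes = 102400 4KiB blocks
rm -f "$f"
echo "$((initial / 4096)) $((final / 4096))"
```

The block counts this prints (51200 and 102400) match the old/new counts in the rescan notice.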
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:31.942 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.210 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.210 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.574 12:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 201d7854-a32d-48ba-a189-37e78f99d27a 00:10:32.574 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:32.835 [2024-11-29 12:54:35.294022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=741008 00:10:32.835 12:54:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 741008 /var/tmp/bdevperf.sock 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 741008 ']' 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.835 12:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:33.097 [2024-11-29 12:54:35.533436] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:10:33.097 [2024-11-29 12:54:35.533507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741008 ] 00:10:33.097 [2024-11-29 12:54:35.624613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.097 [2024-11-29 12:54:35.677642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.041 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.041 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:34.041 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:34.041 Nvme0n1 00:10:34.041 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:34.302 [ 00:10:34.302 { 00:10:34.302 "name": "Nvme0n1", 00:10:34.302 "aliases": [ 00:10:34.302 "201d7854-a32d-48ba-a189-37e78f99d27a" 00:10:34.302 ], 00:10:34.302 "product_name": "NVMe disk", 00:10:34.302 "block_size": 4096, 00:10:34.302 "num_blocks": 38912, 00:10:34.302 "uuid": "201d7854-a32d-48ba-a189-37e78f99d27a", 00:10:34.302 "numa_id": 0, 00:10:34.302 "assigned_rate_limits": { 00:10:34.302 "rw_ios_per_sec": 0, 00:10:34.302 "rw_mbytes_per_sec": 0, 00:10:34.302 "r_mbytes_per_sec": 0, 00:10:34.302 "w_mbytes_per_sec": 0 00:10:34.302 }, 00:10:34.302 "claimed": false, 00:10:34.302 "zoned": false, 00:10:34.302 "supported_io_types": { 00:10:34.302 "read": true, 
00:10:34.302 "write": true, 00:10:34.302 "unmap": true, 00:10:34.302 "flush": true, 00:10:34.302 "reset": true, 00:10:34.302 "nvme_admin": true, 00:10:34.302 "nvme_io": true, 00:10:34.302 "nvme_io_md": false, 00:10:34.302 "write_zeroes": true, 00:10:34.302 "zcopy": false, 00:10:34.302 "get_zone_info": false, 00:10:34.302 "zone_management": false, 00:10:34.302 "zone_append": false, 00:10:34.302 "compare": true, 00:10:34.302 "compare_and_write": true, 00:10:34.302 "abort": true, 00:10:34.302 "seek_hole": false, 00:10:34.302 "seek_data": false, 00:10:34.302 "copy": true, 00:10:34.302 "nvme_iov_md": false 00:10:34.302 }, 00:10:34.302 "memory_domains": [ 00:10:34.302 { 00:10:34.302 "dma_device_id": "system", 00:10:34.302 "dma_device_type": 1 00:10:34.302 } 00:10:34.302 ], 00:10:34.302 "driver_specific": { 00:10:34.302 "nvme": [ 00:10:34.302 { 00:10:34.302 "trid": { 00:10:34.302 "trtype": "TCP", 00:10:34.302 "adrfam": "IPv4", 00:10:34.302 "traddr": "10.0.0.2", 00:10:34.302 "trsvcid": "4420", 00:10:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:34.302 }, 00:10:34.302 "ctrlr_data": { 00:10:34.302 "cntlid": 1, 00:10:34.302 "vendor_id": "0x8086", 00:10:34.302 "model_number": "SPDK bdev Controller", 00:10:34.302 "serial_number": "SPDK0", 00:10:34.302 "firmware_revision": "25.01", 00:10:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.302 "oacs": { 00:10:34.302 "security": 0, 00:10:34.302 "format": 0, 00:10:34.302 "firmware": 0, 00:10:34.302 "ns_manage": 0 00:10:34.302 }, 00:10:34.302 "multi_ctrlr": true, 00:10:34.302 "ana_reporting": false 00:10:34.302 }, 00:10:34.302 "vs": { 00:10:34.302 "nvme_version": "1.3" 00:10:34.302 }, 00:10:34.302 "ns_data": { 00:10:34.302 "id": 1, 00:10:34.302 "can_share": true 00:10:34.302 } 00:10:34.302 } 00:10:34.302 ], 00:10:34.302 "mp_policy": "active_passive" 00:10:34.302 } 00:10:34.302 } 00:10:34.302 ] 00:10:34.302 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=741344 
00:10:34.302 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:34.302 12:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:34.302 Running I/O for 10 seconds... 00:10:35.688 Latency(us) 00:10:35.688 [2024-11-29T11:54:38.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.688 Nvme0n1 : 1.00 23637.00 92.33 0.00 0.00 0.00 0.00 0.00 00:10:35.688 [2024-11-29T11:54:38.368Z] =================================================================================================================== 00:10:35.688 [2024-11-29T11:54:38.368Z] Total : 23637.00 92.33 0.00 0.00 0.00 0.00 0.00 00:10:35.688 00:10:36.261 12:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:36.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.523 Nvme0n1 : 2.00 23782.50 92.90 0.00 0.00 0.00 0.00 0.00 00:10:36.523 [2024-11-29T11:54:39.203Z] =================================================================================================================== 00:10:36.523 [2024-11-29T11:54:39.203Z] Total : 23782.50 92.90 0.00 0.00 0.00 0.00 0.00 00:10:36.523 00:10:36.523 true 00:10:36.523 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:36.523 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:36.784 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:36.784 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:36.784 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 741344 00:10:37.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.355 Nvme0n1 : 3.00 23844.33 93.14 0.00 0.00 0.00 0.00 0.00 00:10:37.355 [2024-11-29T11:54:40.035Z] =================================================================================================================== 00:10:37.355 [2024-11-29T11:54:40.035Z] Total : 23844.33 93.14 0.00 0.00 0.00 0.00 0.00 00:10:37.355 00:10:38.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.297 Nvme0n1 : 4.00 23887.25 93.31 0.00 0.00 0.00 0.00 0.00 00:10:38.297 [2024-11-29T11:54:40.977Z] =================================================================================================================== 00:10:38.297 [2024-11-29T11:54:40.977Z] Total : 23887.25 93.31 0.00 0.00 0.00 0.00 0.00 00:10:38.297 00:10:39.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.681 Nvme0n1 : 5.00 23929.00 93.47 0.00 0.00 0.00 0.00 0.00 00:10:39.681 [2024-11-29T11:54:42.361Z] =================================================================================================================== 00:10:39.681 [2024-11-29T11:54:42.361Z] Total : 23929.00 93.47 0.00 0.00 0.00 0.00 0.00 00:10:39.681 00:10:40.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.623 Nvme0n1 : 6.00 23964.83 93.61 0.00 0.00 0.00 0.00 0.00 00:10:40.623 [2024-11-29T11:54:43.303Z] =================================================================================================================== 00:10:40.623 
[2024-11-29T11:54:43.303Z] Total : 23964.83 93.61 0.00 0.00 0.00 0.00 0.00 00:10:40.623 00:10:41.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.564 Nvme0n1 : 7.00 23995.00 93.73 0.00 0.00 0.00 0.00 0.00 00:10:41.564 [2024-11-29T11:54:44.244Z] =================================================================================================================== 00:10:41.564 [2024-11-29T11:54:44.244Z] Total : 23995.00 93.73 0.00 0.00 0.00 0.00 0.00 00:10:41.564 00:10:42.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.508 Nvme0n1 : 8.00 24015.62 93.81 0.00 0.00 0.00 0.00 0.00 00:10:42.508 [2024-11-29T11:54:45.188Z] =================================================================================================================== 00:10:42.508 [2024-11-29T11:54:45.188Z] Total : 24015.62 93.81 0.00 0.00 0.00 0.00 0.00 00:10:42.508 00:10:43.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.447 Nvme0n1 : 9.00 24036.11 93.89 0.00 0.00 0.00 0.00 0.00 00:10:43.447 [2024-11-29T11:54:46.127Z] =================================================================================================================== 00:10:43.447 [2024-11-29T11:54:46.127Z] Total : 24036.11 93.89 0.00 0.00 0.00 0.00 0.00 00:10:43.447 00:10:44.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.388 Nvme0n1 : 10.00 24051.70 93.95 0.00 0.00 0.00 0.00 0.00 00:10:44.388 [2024-11-29T11:54:47.068Z] =================================================================================================================== 00:10:44.388 [2024-11-29T11:54:47.068Z] Total : 24051.70 93.95 0.00 0.00 0.00 0.00 0.00 00:10:44.388 00:10:44.388 00:10:44.388 Latency(us) 00:10:44.389 [2024-11-29T11:54:47.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:44.389 Nvme0n1 : 10.00 24051.72 93.95 0.00 0.00 5317.68 3686.40 9611.95 00:10:44.389 [2024-11-29T11:54:47.069Z] =================================================================================================================== 00:10:44.389 [2024-11-29T11:54:47.069Z] Total : 24051.72 93.95 0.00 0.00 5317.68 3686.40 9611.95 00:10:44.389 { 00:10:44.389 "results": [ 00:10:44.389 { 00:10:44.389 "job": "Nvme0n1", 00:10:44.389 "core_mask": "0x2", 00:10:44.389 "workload": "randwrite", 00:10:44.389 "status": "finished", 00:10:44.389 "queue_depth": 128, 00:10:44.389 "io_size": 4096, 00:10:44.389 "runtime": 10.00498, 00:10:44.389 "iops": 24051.72224232332, 00:10:44.389 "mibps": 93.95204000907547, 00:10:44.389 "io_failed": 0, 00:10:44.389 "io_timeout": 0, 00:10:44.389 "avg_latency_us": 5317.683088580171, 00:10:44.389 "min_latency_us": 3686.4, 00:10:44.389 "max_latency_us": 9611.946666666667 00:10:44.389 } 00:10:44.389 ], 00:10:44.389 "core_count": 1 00:10:44.389 } 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 741008 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 741008 ']' 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 741008 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.389 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 741008 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 741008' 00:10:44.649 killing process with pid 741008 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 741008 00:10:44.649 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.649 00:10:44.649 Latency(us) 00:10:44.649 [2024-11-29T11:54:47.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.649 [2024-11-29T11:54:47.329Z] =================================================================================================================== 00:10:44.649 [2024-11-29T11:54:47.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 741008 00:10:44.649 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.909 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.909 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:44.909 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:45.169 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.169 12:54:47 
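The free_clusters=61 figure checked above follows directly from the sizes used earlier in the test: a 4 MiB cluster size, a 150 MiB lvol, and a store grown to 99 data clusters. A small standalone recalculation, with the constants copied from the trace:

```shell
# Recompute the lvstore accounting reported by bdev_lvol_get_lvstores.
cluster_mb=4                 # --cluster-sz 4194304
lvol_mb=150                  # bdev_lvol_create ... lvol 150
total_clusters=99            # total_data_clusters after growing the store to 400M
lvol_clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))       # 150 MiB rounds up to 38
num_blocks=$(( lvol_clusters * cluster_mb * 1024 * 1024 / 4096 ))  # in 4 KiB blocks
free_clusters=$(( total_clusters - lvol_clusters ))
echo "$lvol_clusters $num_blocks $free_clusters"
```

The 38, 38912, and 61 this prints match `num_allocated_clusters`, `num_blocks`, and `free_clusters` in the trace.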
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:45.169 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:45.430 [2024-11-29 12:54:47.863381] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.430 12:54:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:45.430 12:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:45.430 request: 00:10:45.430 { 00:10:45.430 "uuid": "21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc", 00:10:45.430 "method": "bdev_lvol_get_lvstores", 00:10:45.430 "req_id": 1 00:10:45.430 } 00:10:45.430 Got JSON-RPC error response 00:10:45.430 response: 00:10:45.430 { 00:10:45.430 "code": -19, 00:10:45.430 "message": "No such device" 00:10:45.430 } 00:10:45.430 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:45.430 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:45.430 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:45.430 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:45.430 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:45.690 aio_bdev 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 201d7854-a32d-48ba-a189-37e78f99d27a 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=201d7854-a32d-48ba-a189-37e78f99d27a 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.690 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:45.950 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 201d7854-a32d-48ba-a189-37e78f99d27a -t 2000 00:10:45.951 [ 00:10:45.951 { 00:10:45.951 "name": "201d7854-a32d-48ba-a189-37e78f99d27a", 00:10:45.951 "aliases": [ 00:10:45.951 "lvs/lvol" 00:10:45.951 ], 00:10:45.951 "product_name": "Logical Volume", 00:10:45.951 "block_size": 4096, 00:10:45.951 "num_blocks": 38912, 00:10:45.951 "uuid": "201d7854-a32d-48ba-a189-37e78f99d27a", 00:10:45.951 "assigned_rate_limits": { 00:10:45.951 "rw_ios_per_sec": 0, 00:10:45.951 "rw_mbytes_per_sec": 0, 00:10:45.951 "r_mbytes_per_sec": 0, 00:10:45.951 "w_mbytes_per_sec": 0 00:10:45.951 }, 00:10:45.951 "claimed": false, 00:10:45.951 "zoned": false, 00:10:45.951 "supported_io_types": { 00:10:45.951 "read": true, 00:10:45.951 "write": true, 00:10:45.951 "unmap": true, 00:10:45.951 "flush": false, 00:10:45.951 "reset": true, 00:10:45.951 
"nvme_admin": false, 00:10:45.951 "nvme_io": false, 00:10:45.951 "nvme_io_md": false, 00:10:45.951 "write_zeroes": true, 00:10:45.951 "zcopy": false, 00:10:45.951 "get_zone_info": false, 00:10:45.951 "zone_management": false, 00:10:45.951 "zone_append": false, 00:10:45.951 "compare": false, 00:10:45.951 "compare_and_write": false, 00:10:45.951 "abort": false, 00:10:45.951 "seek_hole": true, 00:10:45.951 "seek_data": true, 00:10:45.951 "copy": false, 00:10:45.951 "nvme_iov_md": false 00:10:45.951 }, 00:10:45.951 "driver_specific": { 00:10:45.951 "lvol": { 00:10:45.951 "lvol_store_uuid": "21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc", 00:10:45.951 "base_bdev": "aio_bdev", 00:10:45.951 "thin_provision": false, 00:10:45.951 "num_allocated_clusters": 38, 00:10:45.951 "snapshot": false, 00:10:45.951 "clone": false, 00:10:45.951 "esnap_clone": false 00:10:45.951 } 00:10:45.951 } 00:10:45.951 } 00:10:45.951 ] 00:10:45.951 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:45.951 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:45.951 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:46.211 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:46.211 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:46.211 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:46.471 12:54:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:46.471 12:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 201d7854-a32d-48ba-a189-37e78f99d27a 00:10:46.471 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21213f3d-3a89-4a7b-aa3d-bb78c00ef3dc 00:10:46.731 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.992 00:10:46.992 real 0m15.907s 00:10:46.992 user 0m15.547s 00:10:46.992 sys 0m1.498s 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:46.992 ************************************ 00:10:46.992 END TEST lvs_grow_clean 00:10:46.992 ************************************ 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.992 ************************************ 
00:10:46.992 START TEST lvs_grow_dirty 00:10:46.992 ************************************ 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.992 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.253 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:47.253 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:47.513 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5366d0e3-15c8-4206-9eba-c6014aadd211 00:10:47.513 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:10:47.513 12:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:47.513 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:47.513 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:47.513 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5366d0e3-15c8-4206-9eba-c6014aadd211 lvol 150 00:10:47.774 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:10:47.774 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:47.774 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:48.035 [2024-11-29 12:54:50.481806] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:48.035 [2024-11-29 12:54:50.481851] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:48.035 true 00:10:48.035 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:10:48.035 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:48.035 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:48.035 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:48.295 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:10:48.556 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:48.556 [2024-11-29 12:54:51.139686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.556 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=744287 00:10:48.818 12:54:51 
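The grow mechanic driven here is plain: truncate the AIO backing file to a larger size, then rescan so SPDK notices the new block count (51200 → 102400 above). A minimal stand-in using a scratch file and scaled-down sizes — no SPDK involved; the real test uses 200M/400M and follows the truncate with `rpc.py bdev_aio_rescan aio_bdev`:

```shell
# Scratch-file stand-in for the aio_bdev backing file grown in the trace.
# Sizes are scaled down from the test's 200M/400M; the rescan RPC is omitted.
f=$(mktemp)
truncate -s 200K "$f"        # initial backing-file size
initial=$(wc -c < "$f")
truncate -s 400K "$f"        # grow it in place, as nvmf_lvs_grow.sh@36 does
grown=$(wc -c < "$f")
rm -f "$f"
```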
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 744287 /var/tmp/bdevperf.sock 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 744287 ']' 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.818 12:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.818 [2024-11-29 12:54:51.367931] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:10:48.818 [2024-11-29 12:54:51.367983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744287 ] 00:10:48.818 [2024-11-29 12:54:51.451202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.818 [2024-11-29 12:54:51.481062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.762 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.762 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:49.762 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:50.023 Nvme0n1 00:10:50.023 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:50.284 [ 00:10:50.284 { 00:10:50.284 "name": "Nvme0n1", 00:10:50.284 "aliases": [ 00:10:50.284 "c6baf286-1b6d-4af5-ab42-dbe17fa13c78" 00:10:50.284 ], 00:10:50.284 "product_name": "NVMe disk", 00:10:50.284 "block_size": 4096, 00:10:50.284 "num_blocks": 38912, 00:10:50.284 "uuid": "c6baf286-1b6d-4af5-ab42-dbe17fa13c78", 00:10:50.284 "numa_id": 0, 00:10:50.284 "assigned_rate_limits": { 00:10:50.284 "rw_ios_per_sec": 0, 00:10:50.284 "rw_mbytes_per_sec": 0, 00:10:50.284 "r_mbytes_per_sec": 0, 00:10:50.284 "w_mbytes_per_sec": 0 00:10:50.284 }, 00:10:50.284 "claimed": false, 00:10:50.284 "zoned": false, 00:10:50.284 "supported_io_types": { 00:10:50.284 "read": true, 
00:10:50.284 "write": true, 00:10:50.284 "unmap": true, 00:10:50.284 "flush": true, 00:10:50.284 "reset": true, 00:10:50.284 "nvme_admin": true, 00:10:50.284 "nvme_io": true, 00:10:50.284 "nvme_io_md": false, 00:10:50.284 "write_zeroes": true, 00:10:50.284 "zcopy": false, 00:10:50.284 "get_zone_info": false, 00:10:50.284 "zone_management": false, 00:10:50.284 "zone_append": false, 00:10:50.284 "compare": true, 00:10:50.284 "compare_and_write": true, 00:10:50.284 "abort": true, 00:10:50.284 "seek_hole": false, 00:10:50.284 "seek_data": false, 00:10:50.284 "copy": true, 00:10:50.284 "nvme_iov_md": false 00:10:50.284 }, 00:10:50.284 "memory_domains": [ 00:10:50.284 { 00:10:50.284 "dma_device_id": "system", 00:10:50.284 "dma_device_type": 1 00:10:50.284 } 00:10:50.284 ], 00:10:50.285 "driver_specific": { 00:10:50.285 "nvme": [ 00:10:50.285 { 00:10:50.285 "trid": { 00:10:50.285 "trtype": "TCP", 00:10:50.285 "adrfam": "IPv4", 00:10:50.285 "traddr": "10.0.0.2", 00:10:50.285 "trsvcid": "4420", 00:10:50.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:50.285 }, 00:10:50.285 "ctrlr_data": { 00:10:50.285 "cntlid": 1, 00:10:50.285 "vendor_id": "0x8086", 00:10:50.285 "model_number": "SPDK bdev Controller", 00:10:50.285 "serial_number": "SPDK0", 00:10:50.285 "firmware_revision": "25.01", 00:10:50.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.285 "oacs": { 00:10:50.285 "security": 0, 00:10:50.285 "format": 0, 00:10:50.285 "firmware": 0, 00:10:50.285 "ns_manage": 0 00:10:50.285 }, 00:10:50.285 "multi_ctrlr": true, 00:10:50.285 "ana_reporting": false 00:10:50.285 }, 00:10:50.285 "vs": { 00:10:50.285 "nvme_version": "1.3" 00:10:50.285 }, 00:10:50.285 "ns_data": { 00:10:50.285 "id": 1, 00:10:50.285 "can_share": true 00:10:50.285 } 00:10:50.285 } 00:10:50.285 ], 00:10:50.285 "mp_policy": "active_passive" 00:10:50.285 } 00:10:50.285 } 00:10:50.285 ] 00:10:50.285 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=744479 
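At this point bdevperf is attached and `perform_tests` is about to run in the background with its PID captured as `run_test_pid`; the script then sleeps, grows the lvstore mid-run, and waits on that PID. A scaled-down sketch of the same background/record-PID/wait shape (a short sleep stands in for the 10-second randwrite run; nothing here talks to bdevperf):

```shell
# Background job standing in for the perform_tests I/O run in the trace.
outfile=$(mktemp)
( sleep 0.2; echo "run complete" > "$outfile" ) &
run_test_pid=$!              # captured like nvmf_lvs_grow.sh@56 does
sleep 0.1                    # the test sleeps 2s, then grows the lvstore mid-run
wait "$run_test_pid"         # then waits for the run to finish (sh@65)
result=$(cat "$outfile")
rm -f "$outfile"
```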
00:10:50.285 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:50.285 12:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:50.285 Running I/O for 10 seconds... 00:10:51.227 Latency(us) 00:10:51.227 [2024-11-29T11:54:53.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.227 Nvme0n1 : 1.00 24918.00 97.34 0.00 0.00 0.00 0.00 0.00 00:10:51.227 [2024-11-29T11:54:53.907Z] =================================================================================================================== 00:10:51.227 [2024-11-29T11:54:53.907Z] Total : 24918.00 97.34 0.00 0.00 0.00 0.00 0.00 00:10:51.227 00:10:52.169 12:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:10:52.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.169 Nvme0n1 : 2.00 25035.00 97.79 0.00 0.00 0.00 0.00 0.00 00:10:52.169 [2024-11-29T11:54:54.849Z] =================================================================================================================== 00:10:52.169 [2024-11-29T11:54:54.849Z] Total : 25035.00 97.79 0.00 0.00 0.00 0.00 0.00 00:10:52.169 00:10:52.429 true 00:10:52.429 12:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:10:52.429 12:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:52.429 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:52.429 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:52.429 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 744479 00:10:53.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.367 Nvme0n1 : 3.00 25093.67 98.02 0.00 0.00 0.00 0.00 0.00 00:10:53.367 [2024-11-29T11:54:56.047Z] =================================================================================================================== 00:10:53.367 [2024-11-29T11:54:56.047Z] Total : 25093.67 98.02 0.00 0.00 0.00 0.00 0.00 00:10:53.367 00:10:54.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.308 Nvme0n1 : 4.00 25139.50 98.20 0.00 0.00 0.00 0.00 0.00 00:10:54.308 [2024-11-29T11:54:56.988Z] =================================================================================================================== 00:10:54.308 [2024-11-29T11:54:56.988Z] Total : 25139.50 98.20 0.00 0.00 0.00 0.00 0.00 00:10:54.308 00:10:55.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.246 Nvme0n1 : 5.00 25167.60 98.31 0.00 0.00 0.00 0.00 0.00 00:10:55.246 [2024-11-29T11:54:57.926Z] =================================================================================================================== 00:10:55.246 [2024-11-29T11:54:57.926Z] Total : 25167.60 98.31 0.00 0.00 0.00 0.00 0.00 00:10:55.246 00:10:56.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.187 Nvme0n1 : 6.00 25196.83 98.43 0.00 0.00 0.00 0.00 0.00 00:10:56.187 [2024-11-29T11:54:58.867Z] =================================================================================================================== 00:10:56.187 
[2024-11-29T11:54:58.867Z] Total : 25196.83 98.43 0.00 0.00 0.00 0.00 0.00 00:10:56.187 00:10:57.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.573 Nvme0n1 : 7.00 25217.71 98.51 0.00 0.00 0.00 0.00 0.00 00:10:57.573 [2024-11-29T11:55:00.253Z] =================================================================================================================== 00:10:57.573 [2024-11-29T11:55:00.253Z] Total : 25217.71 98.51 0.00 0.00 0.00 0.00 0.00 00:10:57.573 00:10:58.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.516 Nvme0n1 : 8.00 25232.62 98.56 0.00 0.00 0.00 0.00 0.00 00:10:58.516 [2024-11-29T11:55:01.196Z] =================================================================================================================== 00:10:58.516 [2024-11-29T11:55:01.196Z] Total : 25232.62 98.56 0.00 0.00 0.00 0.00 0.00 00:10:58.516 00:10:59.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.456 Nvme0n1 : 9.00 25230.56 98.56 0.00 0.00 0.00 0.00 0.00 00:10:59.456 [2024-11-29T11:55:02.136Z] =================================================================================================================== 00:10:59.456 [2024-11-29T11:55:02.136Z] Total : 25230.56 98.56 0.00 0.00 0.00 0.00 0.00 00:10:59.456 00:11:00.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.422 Nvme0n1 : 10.00 25241.90 98.60 0.00 0.00 0.00 0.00 0.00 00:11:00.422 [2024-11-29T11:55:03.102Z] =================================================================================================================== 00:11:00.422 [2024-11-29T11:55:03.102Z] Total : 25241.90 98.60 0.00 0.00 0.00 0.00 0.00 00:11:00.422 00:11:00.422 00:11:00.422 Latency(us) 00:11:00.422 [2024-11-29T11:55:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:00.422 Nvme0n1 : 10.00 25244.30 98.61 0.00 0.00 5067.23 1590.61 9120.43 00:11:00.422 [2024-11-29T11:55:03.102Z] =================================================================================================================== 00:11:00.422 [2024-11-29T11:55:03.102Z] Total : 25244.30 98.61 0.00 0.00 5067.23 1590.61 9120.43 00:11:00.422 { 00:11:00.422 "results": [ 00:11:00.422 { 00:11:00.422 "job": "Nvme0n1", 00:11:00.422 "core_mask": "0x2", 00:11:00.422 "workload": "randwrite", 00:11:00.422 "status": "finished", 00:11:00.422 "queue_depth": 128, 00:11:00.422 "io_size": 4096, 00:11:00.422 "runtime": 10.00412, 00:11:00.422 "iops": 25244.299348668348, 00:11:00.422 "mibps": 98.61054433073573, 00:11:00.422 "io_failed": 0, 00:11:00.422 "io_timeout": 0, 00:11:00.422 "avg_latency_us": 5067.225752777372, 00:11:00.422 "min_latency_us": 1590.6133333333332, 00:11:00.422 "max_latency_us": 9120.426666666666 00:11:00.422 } 00:11:00.422 ], 00:11:00.422 "core_count": 1 00:11:00.422 } 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 744287 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 744287 ']' 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 744287 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 744287 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:00.422 12:55:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 744287' 00:11:00.422 killing process with pid 744287 00:11:00.422 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 744287 00:11:00.422 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.422 00:11:00.422 Latency(us) 00:11:00.422 [2024-11-29T11:55:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.422 [2024-11-29T11:55:03.102Z] =================================================================================================================== 00:11:00.422 [2024-11-29T11:55:03.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.423 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 744287 00:11:00.423 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:00.683 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 740419 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 740419 00:11:00.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 740419 Killed "${NVMF_APP[@]}" "$@" 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=746796 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 746796 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 746796 ']' 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.945 12:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.206 [2024-11-29 12:55:03.659278] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:11:01.206 [2024-11-29 12:55:03.659337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.206 [2024-11-29 12:55:03.750295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.206 [2024-11-29 12:55:03.779555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.206 [2024-11-29 12:55:03.779582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.206 [2024-11-29 12:55:03.779588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.206 [2024-11-29 12:55:03.779596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.206 [2024-11-29 12:55:03.779600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
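`waitforlisten` above blocks until the freshly started nvmf_tgt is accepting RPCs on /var/tmp/spdk.sock. A self-contained sketch of that polling idea, with a plain file standing in for the UNIX-domain RPC socket (the real helper probes the socket via rpc.py, not the filesystem):

```shell
# Poll until a stand-in "socket" file appears, bounded by a retry budget.
sockfile=$(mktemp -u)                 # path only; nothing exists there yet
( sleep 0.2; : > "$sockfile" ) &      # stand-in app creating its socket late
i=0
while [ ! -e "$sockfile" ] && [ "$i" -lt 100 ]; do
    sleep 0.05
    i=$(( i + 1 ))
done
listening=no
[ -e "$sockfile" ] && listening=yes
wait                                  # reap the stand-in app
rm -f "$sockfile"
```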
00:11:01.206 [2024-11-29 12:55:03.780044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.777 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.777 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:01.777 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.777 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.777 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:02.038 [2024-11-29 12:55:04.638546] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:02.038 [2024-11-29 12:55:04.638619] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:02.038 [2024-11-29 12:55:04.638642] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c6baf286-1b6d-4af5-ab42-dbe17fa13c78 
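The `waitforbdev` prologue traced here (and continuing just after) records the bdev name and falls back to a 2000 ms timeout when the caller passes none. The same default-if-empty idiom, isolated; the bdev name is copied from the trace:

```shell
# Default-if-empty idiom from the waitforbdev prologue in the trace.
bdev_name=c6baf286-1b6d-4af5-ab42-dbe17fa13c78
bdev_timeout=""                               # caller passed no timeout
[ -z "$bdev_timeout" ] && bdev_timeout=2000   # fall back to 2000 ms
```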
00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.038 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:02.299 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6baf286-1b6d-4af5-ab42-dbe17fa13c78 -t 2000 00:11:02.559 [ 00:11:02.559 { 00:11:02.559 "name": "c6baf286-1b6d-4af5-ab42-dbe17fa13c78", 00:11:02.559 "aliases": [ 00:11:02.559 "lvs/lvol" 00:11:02.559 ], 00:11:02.559 "product_name": "Logical Volume", 00:11:02.559 "block_size": 4096, 00:11:02.559 "num_blocks": 38912, 00:11:02.559 "uuid": "c6baf286-1b6d-4af5-ab42-dbe17fa13c78", 00:11:02.559 "assigned_rate_limits": { 00:11:02.559 "rw_ios_per_sec": 0, 00:11:02.559 "rw_mbytes_per_sec": 0, 00:11:02.559 "r_mbytes_per_sec": 0, 00:11:02.559 "w_mbytes_per_sec": 0 00:11:02.559 }, 00:11:02.559 "claimed": false, 00:11:02.559 "zoned": false, 00:11:02.559 "supported_io_types": { 00:11:02.559 "read": true, 00:11:02.559 "write": true, 00:11:02.559 "unmap": true, 00:11:02.559 "flush": false, 00:11:02.559 "reset": true, 00:11:02.559 "nvme_admin": false, 00:11:02.559 "nvme_io": false, 00:11:02.559 "nvme_io_md": false, 00:11:02.559 "write_zeroes": true, 00:11:02.559 "zcopy": false, 00:11:02.559 "get_zone_info": false, 00:11:02.559 "zone_management": false, 00:11:02.559 "zone_append": 
false, 00:11:02.559 "compare": false, 00:11:02.559 "compare_and_write": false, 00:11:02.559 "abort": false, 00:11:02.559 "seek_hole": true, 00:11:02.559 "seek_data": true, 00:11:02.559 "copy": false, 00:11:02.559 "nvme_iov_md": false 00:11:02.559 }, 00:11:02.559 "driver_specific": { 00:11:02.559 "lvol": { 00:11:02.559 "lvol_store_uuid": "5366d0e3-15c8-4206-9eba-c6014aadd211", 00:11:02.559 "base_bdev": "aio_bdev", 00:11:02.559 "thin_provision": false, 00:11:02.559 "num_allocated_clusters": 38, 00:11:02.559 "snapshot": false, 00:11:02.559 "clone": false, 00:11:02.559 "esnap_clone": false 00:11:02.559 } 00:11:02.559 } 00:11:02.559 } 00:11:02.559 ] 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:02.559 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:02.818 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:02.818 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:02.818 [2024-11-29 12:55:05.495200] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.078 12:55:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:03.078 request: 00:11:03.078 { 00:11:03.078 "uuid": "5366d0e3-15c8-4206-9eba-c6014aadd211", 00:11:03.078 "method": "bdev_lvol_get_lvstores", 00:11:03.078 "req_id": 1 00:11:03.078 } 00:11:03.078 Got JSON-RPC error response 00:11:03.078 response: 00:11:03.078 { 00:11:03.078 "code": -19, 00:11:03.078 "message": "No such device" 00:11:03.078 } 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.078 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:03.339 aio_bdev 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.339 12:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:03.600 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6baf286-1b6d-4af5-ab42-dbe17fa13c78 -t 2000 00:11:03.600 [ 00:11:03.600 { 00:11:03.600 "name": "c6baf286-1b6d-4af5-ab42-dbe17fa13c78", 00:11:03.600 "aliases": [ 00:11:03.600 "lvs/lvol" 00:11:03.600 ], 00:11:03.600 "product_name": "Logical Volume", 00:11:03.600 "block_size": 4096, 00:11:03.600 "num_blocks": 38912, 00:11:03.600 "uuid": "c6baf286-1b6d-4af5-ab42-dbe17fa13c78", 00:11:03.600 "assigned_rate_limits": { 00:11:03.600 "rw_ios_per_sec": 0, 00:11:03.600 "rw_mbytes_per_sec": 0, 00:11:03.600 "r_mbytes_per_sec": 0, 00:11:03.600 "w_mbytes_per_sec": 0 00:11:03.600 }, 00:11:03.600 "claimed": false, 00:11:03.600 "zoned": false, 00:11:03.600 "supported_io_types": { 00:11:03.600 "read": true, 00:11:03.600 "write": true, 00:11:03.600 "unmap": true, 00:11:03.600 "flush": false, 00:11:03.600 "reset": true, 00:11:03.600 "nvme_admin": false, 00:11:03.600 "nvme_io": false, 00:11:03.600 "nvme_io_md": false, 00:11:03.600 "write_zeroes": true, 00:11:03.600 "zcopy": false, 00:11:03.600 "get_zone_info": false, 00:11:03.600 "zone_management": false, 00:11:03.600 "zone_append": false, 00:11:03.600 "compare": false, 00:11:03.600 "compare_and_write": false, 
00:11:03.600 "abort": false, 00:11:03.600 "seek_hole": true, 00:11:03.600 "seek_data": true, 00:11:03.600 "copy": false, 00:11:03.600 "nvme_iov_md": false 00:11:03.600 }, 00:11:03.600 "driver_specific": { 00:11:03.600 "lvol": { 00:11:03.600 "lvol_store_uuid": "5366d0e3-15c8-4206-9eba-c6014aadd211", 00:11:03.600 "base_bdev": "aio_bdev", 00:11:03.600 "thin_provision": false, 00:11:03.600 "num_allocated_clusters": 38, 00:11:03.600 "snapshot": false, 00:11:03.600 "clone": false, 00:11:03.600 "esnap_clone": false 00:11:03.600 } 00:11:03.600 } 00:11:03.600 } 00:11:03.600 ] 00:11:03.600 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:03.600 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:03.600 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:03.862 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:03.862 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:03.862 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:04.123 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:04.123 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6baf286-1b6d-4af5-ab42-dbe17fa13c78 00:11:04.123 12:55:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5366d0e3-15c8-4206-9eba-c6014aadd211 00:11:04.383 12:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:04.644 00:11:04.644 real 0m17.570s 00:11:04.644 user 0m45.668s 00:11:04.644 sys 0m3.205s 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:04.644 ************************************ 00:11:04.644 END TEST lvs_grow_dirty 00:11:04.644 ************************************ 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:04.644 nvmf_trace.0 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.644 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.644 rmmod nvme_tcp 00:11:04.644 rmmod nvme_fabrics 00:11:04.644 rmmod nvme_keyring 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 746796 ']' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 746796 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 746796 ']' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 746796 
00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 746796 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 746796' 00:11:04.905 killing process with pid 746796 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 746796 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 746796 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.905 12:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.534 00:11:07.534 real 0m44.908s 00:11:07.534 user 1m7.653s 00:11:07.534 sys 0m10.838s 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.534 ************************************ 00:11:07.534 END TEST nvmf_lvs_grow 00:11:07.534 ************************************ 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.534 ************************************ 00:11:07.534 START TEST nvmf_bdev_io_wait 00:11:07.534 ************************************ 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.534 * Looking for test storage... 
00:11:07.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.534 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.534 --rc genhtml_branch_coverage=1 00:11:07.534 --rc genhtml_function_coverage=1 00:11:07.534 --rc genhtml_legend=1 00:11:07.534 --rc geninfo_all_blocks=1 00:11:07.534 --rc geninfo_unexecuted_blocks=1 00:11:07.534 00:11:07.534 ' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.534 --rc genhtml_branch_coverage=1 00:11:07.534 --rc genhtml_function_coverage=1 00:11:07.534 --rc genhtml_legend=1 00:11:07.534 --rc geninfo_all_blocks=1 00:11:07.534 --rc geninfo_unexecuted_blocks=1 00:11:07.534 00:11:07.534 ' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.534 --rc genhtml_branch_coverage=1 00:11:07.534 --rc genhtml_function_coverage=1 00:11:07.534 --rc genhtml_legend=1 00:11:07.534 --rc geninfo_all_blocks=1 00:11:07.534 --rc geninfo_unexecuted_blocks=1 00:11:07.534 00:11:07.534 ' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.534 --rc genhtml_branch_coverage=1 00:11:07.534 --rc genhtml_function_coverage=1 00:11:07.534 --rc genhtml_legend=1 00:11:07.534 --rc geninfo_all_blocks=1 00:11:07.534 --rc geninfo_unexecuted_blocks=1 00:11:07.534 00:11:07.534 ' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.534 12:55:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.534 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.535 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
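The `e810`/`x722`/`mlx` arrays declared above get populated by matching PCI vendor:device pairs read from sysfs, which is where the later `Found 0000:4b:00.0 (0x8086 - 0x159b)` lines come from. A minimal sketch of that bucketing; a fake sysfs tree stands in for `/sys/bus/pci/devices` so the sketch runs anywhere (the directory layout mirrors the real one, but the tree itself is fabricated for illustration):

```shell
#!/usr/bin/env bash
# Bucket PCI functions by vendor:device ID, as gather_supported_nvmf_pci_devs
# does in the log above. Build a fake sysfs tree with two E810 ports.
sysfs=$(mktemp -d)
for fn in 0000:4b:00.0 0000:4b:00.1; do
  mkdir -p "$sysfs/$fn"
  echo 0x8086 > "$sysfs/$fn/vendor"   # Intel vendor ID
  echo 0x159b > "$sysfs/$fn/device"   # E810, one of the IDs matched above
done

intel=0x8086
e810=()
for dev in "$sysfs"/*; do
  vendor=$(<"$dev/vendor")
  device=$(<"$dev/device")
  if [ "$vendor" = "$intel" ] && [ "$device" = "0x159b" ]; then
    e810+=("${dev##*/}")
    echo "Found ${dev##*/} ($vendor - $device)"
  fi
done
echo "e810 count: ${#e810[@]}"
```

On real hardware the net interface names for each matched function are then read from `/sys/bus/pci/devices/$pci/net/`, which is what produces the `Found net devices under 0000:4b:00.0: cvl_0_0` lines further down.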
00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.683 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.684 12:55:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:15.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:15.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.684 12:55:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:15.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.684 
12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:15.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.684 12:55:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:11:15.684 00:11:15.684 --- 10.0.0.2 ping statistics --- 00:11:15.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.684 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:15.684 00:11:15.684 --- 10.0.0.1 ping statistics --- 00:11:15.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.684 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.684 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=751869 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 751869 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 751869 ']' 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.685 12:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.685 [2024-11-29 12:55:17.459473] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:11:15.685 [2024-11-29 12:55:17.459541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.685 [2024-11-29 12:55:17.558832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.685 [2024-11-29 12:55:17.614546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.685 [2024-11-29 12:55:17.614603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:15.685 [2024-11-29 12:55:17.614611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.685 [2024-11-29 12:55:17.614619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.685 [2024-11-29 12:55:17.614625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.685 [2024-11-29 12:55:17.616709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.685 [2024-11-29 12:55:17.616872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.685 [2024-11-29 12:55:17.617035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.685 [2024-11-29 12:55:17.617036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.685 12:55:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.685 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 [2024-11-29 12:55:18.416280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 Malloc0 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.947 
12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:15.947 [2024-11-29 12:55:18.481816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=752015 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=752018 
00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:15.947 { 00:11:15.947 "params": { 00:11:15.947 "name": "Nvme$subsystem", 00:11:15.947 "trtype": "$TEST_TRANSPORT", 00:11:15.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.947 "adrfam": "ipv4", 00:11:15.947 "trsvcid": "$NVMF_PORT", 00:11:15.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.947 "hdgst": ${hdgst:-false}, 00:11:15.947 "ddgst": ${ddgst:-false} 00:11:15.947 }, 00:11:15.947 "method": "bdev_nvme_attach_controller" 00:11:15.947 } 00:11:15.947 EOF 00:11:15.947 )") 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=752021 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:15.947 { 00:11:15.947 "params": { 00:11:15.947 
"name": "Nvme$subsystem", 00:11:15.947 "trtype": "$TEST_TRANSPORT", 00:11:15.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.947 "adrfam": "ipv4", 00:11:15.947 "trsvcid": "$NVMF_PORT", 00:11:15.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.947 "hdgst": ${hdgst:-false}, 00:11:15.947 "ddgst": ${ddgst:-false} 00:11:15.947 }, 00:11:15.947 "method": "bdev_nvme_attach_controller" 00:11:15.947 } 00:11:15.947 EOF 00:11:15.947 )") 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=752024 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:15.947 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:15.948 { 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme$subsystem", 00:11:15.948 "trtype": "$TEST_TRANSPORT", 00:11:15.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "$NVMF_PORT", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.948 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:11:15.948 "hdgst": ${hdgst:-false}, 00:11:15.948 "ddgst": ${ddgst:-false} 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 } 00:11:15.948 EOF 00:11:15.948 )") 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:15.948 { 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme$subsystem", 00:11:15.948 "trtype": "$TEST_TRANSPORT", 00:11:15.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "$NVMF_PORT", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.948 "hdgst": ${hdgst:-false}, 00:11:15.948 "ddgst": ${ddgst:-false} 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 } 00:11:15.948 EOF 00:11:15.948 )") 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 752015 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme1", 00:11:15.948 "trtype": "tcp", 00:11:15.948 "traddr": "10.0.0.2", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "4420", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.948 "hdgst": false, 00:11:15.948 "ddgst": false 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 }' 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
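The JSON fragments printed above are built by `gen_nvmf_target_json` from a heredoc template, expanded per subsystem, and handed to bdevperf via process substitution, which the consumer sees as a `/dev/fd/NN` path (hence `--json /dev/fd/63` in the bdevperf command lines). A sketch of that pattern with the values hardcoded for illustration and a stand-in consumer in place of bdevperf:

```shell
#!/usr/bin/env bash
# Expand a heredoc template the way gen_nvmf_target_json does: shell
# variables inside the heredoc are substituted when it is read.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# Stand-in consumer: reads the config from whatever fd path it is given,
# the way bdevperf reads --json /dev/fd/63.
consume() { grep -c '"traddr"' "$1"; }
consume <(printf '%s\n' "$config")
```

Because `<(...)` expands to a file descriptor path rather than a file on disk, the generated config never touches the workspace, and each of the four bdevperf instances above gets its own independent copy.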
00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme1", 00:11:15.948 "trtype": "tcp", 00:11:15.948 "traddr": "10.0.0.2", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "4420", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.948 "hdgst": false, 00:11:15.948 "ddgst": false 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 }' 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme1", 00:11:15.948 "trtype": "tcp", 00:11:15.948 "traddr": "10.0.0.2", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "4420", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.948 "hdgst": false, 00:11:15.948 "ddgst": false 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 }' 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:15.948 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:15.948 "params": { 00:11:15.948 "name": "Nvme1", 00:11:15.948 "trtype": "tcp", 00:11:15.948 "traddr": "10.0.0.2", 00:11:15.948 "adrfam": "ipv4", 00:11:15.948 "trsvcid": "4420", 00:11:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.948 "hdgst": false, 00:11:15.948 "ddgst": false 00:11:15.948 }, 00:11:15.948 "method": "bdev_nvme_attach_controller" 00:11:15.948 }' 00:11:15.948 [2024-11-29 12:55:18.539409] Starting SPDK v25.01-pre git sha1 
da516d862 / DPDK 24.03.0 initialization... 00:11:15.948 [2024-11-29 12:55:18.539482] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:15.948 [2024-11-29 12:55:18.541500] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:11:15.948 [2024-11-29 12:55:18.541559] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:15.948 [2024-11-29 12:55:18.542696] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:11:15.948 [2024-11-29 12:55:18.542757] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:15.948 [2024-11-29 12:55:18.543494] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:11:15.948 [2024-11-29 12:55:18.543556] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:16.209 [2024-11-29 12:55:18.747528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.209 [2024-11-29 12:55:18.785836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:16.209 [2024-11-29 12:55:18.794581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.209 [2024-11-29 12:55:18.829609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:16.209 [2024-11-29 12:55:18.862042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.470 [2024-11-29 12:55:18.901813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:16.470 [2024-11-29 12:55:18.951851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.470 [2024-11-29 12:55:18.994193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:16.470 Running I/O for 1 seconds... 00:11:16.731 Running I/O for 1 seconds... 00:11:16.731 Running I/O for 1 seconds... 00:11:16.731 Running I/O for 1 seconds... 
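The `printf '%s\n' '{ ... "method": "bdev_nvme_attach_controller" ... }'` traces above show the per-controller config each bdevperf instance is handed. As a rough illustration (not code from the test scripts), the same parameters can be framed as a standard JSON-RPC 2.0 request; the field values below are copied from the trace, while the framing helper itself is a sketch:

```python
import json

def build_attach_request(req_id=1):
    """Builds a JSON-RPC 2.0 request carrying the bdev_nvme_attach_controller
    parameters printed by nvmf/common.sh in the trace above. The wrapper is
    illustrative; only the params dict is taken from the log."""
    params = {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": False,
        "ddgst": False,
    }
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": params,
    })

print(build_attach_request())
```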
00:11:17.673 7128.00 IOPS, 27.84 MiB/s 00:11:17.673 Latency(us) 00:11:17.673 [2024-11-29T11:55:20.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.673 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:17.673 Nvme1n1 : 1.02 7134.67 27.87 0.00 0.00 17773.55 7536.64 22500.69 00:11:17.673 [2024-11-29T11:55:20.353Z] =================================================================================================================== 00:11:17.673 [2024-11-29T11:55:20.353Z] Total : 7134.67 27.87 0.00 0.00 17773.55 7536.64 22500.69 00:11:17.673 171608.00 IOPS, 670.34 MiB/s 00:11:17.673 Latency(us) 00:11:17.673 [2024-11-29T11:55:20.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.673 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:17.673 Nvme1n1 : 1.00 171238.59 668.90 0.00 0.00 743.14 315.73 2143.57 00:11:17.673 [2024-11-29T11:55:20.353Z] =================================================================================================================== 00:11:17.673 [2024-11-29T11:55:20.353Z] Total : 171238.59 668.90 0.00 0.00 743.14 315.73 2143.57 00:11:17.673 7101.00 IOPS, 27.74 MiB/s 00:11:17.673 Latency(us) 00:11:17.673 [2024-11-29T11:55:20.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.673 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:17.673 Nvme1n1 : 1.01 7208.80 28.16 0.00 0.00 17704.08 5106.35 32112.64 00:11:17.673 [2024-11-29T11:55:20.353Z] =================================================================================================================== 00:11:17.673 [2024-11-29T11:55:20.353Z] Total : 7208.80 28.16 0.00 0.00 17704.08 5106.35 32112.64 00:11:17.673 10509.00 IOPS, 41.05 MiB/s 00:11:17.673 Latency(us) 00:11:17.673 [2024-11-29T11:55:20.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.673 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:11:17.673 Nvme1n1 : 1.01 10579.34 41.33 0.00 0.00 12055.47 4751.36 24139.09 00:11:17.673 [2024-11-29T11:55:20.353Z] =================================================================================================================== 00:11:17.673 [2024-11-29T11:55:20.353Z] Total : 10579.34 41.33 0.00 0.00 12055.47 4751.36 24139.09 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 752018 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 752021 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 752024 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
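Each latency table above reports both IOPS and MiB/s for a fixed 4096-byte I/O size, so the MiB/s column is just IOPS scaled by the I/O size. A quick sanity check against the four job lines (the conversion helper is illustrative, not part of bdevperf):

```python
def iops_to_mibs(iops, io_size=4096):
    """Converts an IOPS figure to MiB/s for a fixed I/O size,
    matching the 'IO size: 4096' shown in each bdevperf job line."""
    return iops * io_size / (1024 * 1024)

# (IOPS, reported MiB/s) pairs from the latency tables above
for iops, mibs in [(7134.67, 27.87), (171238.59, 668.90),
                   (7208.80, 28.16), (10579.34, 41.33)]:
    assert abs(iops_to_mibs(iops) - mibs) < 0.01
```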
00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.934 rmmod nvme_tcp 00:11:17.934 rmmod nvme_fabrics 00:11:17.934 rmmod nvme_keyring 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 751869 ']' 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 751869 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 751869 ']' 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 751869 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 751869 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 751869' 00:11:17.934 killing process with pid 751869 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 751869 00:11:17.934 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 751869 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.196 12:55:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.109 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.109 00:11:20.109 real 0m13.086s 00:11:20.109 user 0m19.819s 00:11:20.109 sys 0m7.457s 00:11:20.109 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.109 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:20.109 ************************************ 00:11:20.109 END TEST nvmf_bdev_io_wait 00:11:20.109 
************************************ 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.369 ************************************ 00:11:20.369 START TEST nvmf_queue_depth 00:11:20.369 ************************************ 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:20.369 * Looking for test storage... 00:11:20.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.369 12:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.369 12:55:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:20.369 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:20.370 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.631 --rc genhtml_branch_coverage=1 00:11:20.631 --rc genhtml_function_coverage=1 00:11:20.631 --rc genhtml_legend=1 00:11:20.631 --rc geninfo_all_blocks=1 00:11:20.631 --rc 
geninfo_unexecuted_blocks=1 00:11:20.631 00:11:20.631 ' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.631 --rc genhtml_branch_coverage=1 00:11:20.631 --rc genhtml_function_coverage=1 00:11:20.631 --rc genhtml_legend=1 00:11:20.631 --rc geninfo_all_blocks=1 00:11:20.631 --rc geninfo_unexecuted_blocks=1 00:11:20.631 00:11:20.631 ' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.631 --rc genhtml_branch_coverage=1 00:11:20.631 --rc genhtml_function_coverage=1 00:11:20.631 --rc genhtml_legend=1 00:11:20.631 --rc geninfo_all_blocks=1 00:11:20.631 --rc geninfo_unexecuted_blocks=1 00:11:20.631 00:11:20.631 ' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.631 --rc genhtml_branch_coverage=1 00:11:20.631 --rc genhtml_function_coverage=1 00:11:20.631 --rc genhtml_legend=1 00:11:20.631 --rc geninfo_all_blocks=1 00:11:20.631 --rc geninfo_unexecuted_blocks=1 00:11:20.631 00:11:20.631 ' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.631 12:55:23 
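The `cmp_versions` trace above splits each version string on `IFS=.-:` and compares the numeric fields left to right (here `lt 1.15 2` succeeds, so the fallback `LCOV_OPTS` are exported). A minimal sketch of that comparison in Python; padding the shorter version with zeros is an assumption about how the script treats missing fields:

```python
import re

def lt(ver1, ver2):
    """Component-wise version compare in the spirit of the scripts/common.sh
    cmp_versions trace: split on '.', '-' or ':', compare numeric fields
    left to right, treating missing trailing fields as 0 (an assumption)."""
    a = [int(x) for x in re.split(r"[.\-:]", ver1)]
    b = [int(x) for x in re.split(r"[.\-:]", ver2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

print(lt("1.15", "2"))  # the lcov check traced above
```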
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.631 12:55:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.631 12:55:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.631 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.632 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.632 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.632 12:55:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.803 12:55:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.803 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.804 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.804 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.804 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.804 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.804 
12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:11:28.804 00:11:28.804 --- 10.0.0.2 ping statistics --- 00:11:28.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.804 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:11:28.804 00:11:28.804 --- 10.0.0.1 ping statistics --- 00:11:28.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.804 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=756690 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 756690 
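The `nvmf_tcp_init` sequence traced above (flush both ports, move the target-side port into a network namespace, assign the 10.0.0.1/10.0.0.2 pair, open TCP port 4420, then ping in both directions) can be reproduced by hand. A minimal sketch, assuming root privileges and the `cvl_0_0`/`cvl_0_1` interface names from this run — other machines will report different names under `/sys/bus/pci/devices/*/net/`:

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based TCP topology set up by nvmf/common.sh.
# Assumes two ports of one NIC: cvl_0_0 (target side), cvl_0_1 (initiator side).
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives inside the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in; the comment tag lets cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow port 4420'

# Sanity-check both directions before starting the target, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target port sits in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 is forced onto the physical wire between the two ports rather than being short-circuited through the host loopback path.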
00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 756690 ']' 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 [2024-11-29 12:55:30.695749] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:11:28.804 [2024-11-29 12:55:30.695815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.804 [2024-11-29 12:55:30.771667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.804 [2024-11-29 12:55:30.817317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.804 [2024-11-29 12:55:30.817364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.804 [2024-11-29 12:55:30.817370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.804 [2024-11-29 12:55:30.817380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.804 [2024-11-29 12:55:30.817385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.804 [2024-11-29 12:55:30.818061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 [2024-11-29 12:55:30.983368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 12:55:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 Malloc0 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 [2024-11-29 12:55:31.043816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.804 12:55:31 
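The `rpc_cmd` calls traced above build the target configuration in four steps: a TCP transport, a 64 MiB malloc bdev, a subsystem, and a listener on 10.0.0.2:4420. A standalone sketch using SPDK's `rpc.py`; the `SPDK_DIR` path is this workspace's and would need adjusting for other checkouts:

```shell
#!/usr/bin/env bash
# Sketch of the target setup performed by queue_depth.sh via rpc_cmd.
# Assumes an nvmf_tgt process is already listening on /var/tmp/spdk.sock.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                        # -a: allow any host to connect
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

In the actual run every RPC is prefixed with `ip netns exec cvl_0_0_ns_spdk` implicitly, because the target itself was launched inside the namespace and the RPC socket is reached on the host side.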
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=756908 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 756908 /var/tmp/bdevperf.sock 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 756908 ']' 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:28.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.804 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 [2024-11-29 12:55:31.103109] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:11:28.804 [2024-11-29 12:55:31.103182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756908 ] 00:11:28.804 [2024-11-29 12:55:31.194658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.804 [2024-11-29 12:55:31.248045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.375 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.375 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:29.375 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:29.375 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.375 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.635 NVMe0n1 00:11:29.635 12:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.635 12:55:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:29.635 Running I/O for 10 seconds... 
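The measurement itself is driven by bdevperf in three steps visible in the trace: start the tool idle with `-z`, attach a controller over NVMe/TCP via RPC, then trigger the 10-second verify run at queue depth 1024 with `perform_tests`. A sketch of the same flow, with paths taken from this run and a simple sleep standing in for the test suite's `waitforlisten` helper:

```shell
#!/usr/bin/env bash
# Sketch of the bdevperf run traced above: start idle, attach over TCP, run.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
SOCK=/var/tmp/bdevperf.sock

# -z makes bdevperf wait for RPC instead of running immediately,
# so bdevs can be attached first.
"$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" \
    -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
sleep 2   # crude stand-in for waitforlisten on $SOCK

# Attach the remote namespace; it shows up as bdev NVMe0n1.
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# Run the configured workload and print per-bdev results (the Latency table
# and JSON blob seen below in the log).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$BDEVPERF_PID"
```

The `-q 1024 -o 4096 -w verify -t 10` arguments match the command line in the trace: queue depth 1024, 4 KiB I/O, verify workload, 10-second runtime — which is what the per-second IOPS samples and the final ~12.4k IOPS summary reflect.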
00:11:31.519 10410.00 IOPS, 40.66 MiB/s [2024-11-29T11:55:35.586Z] 10774.50 IOPS, 42.09 MiB/s [2024-11-29T11:55:36.527Z] 11053.67 IOPS, 43.18 MiB/s [2024-11-29T11:55:37.469Z] 11264.00 IOPS, 44.00 MiB/s [2024-11-29T11:55:38.411Z] 11512.80 IOPS, 44.97 MiB/s [2024-11-29T11:55:39.353Z] 11775.33 IOPS, 46.00 MiB/s [2024-11-29T11:55:40.295Z] 11972.57 IOPS, 46.77 MiB/s [2024-11-29T11:55:41.238Z] 12110.62 IOPS, 47.31 MiB/s [2024-11-29T11:55:42.194Z] 12224.44 IOPS, 47.75 MiB/s [2024-11-29T11:55:42.454Z] 12357.40 IOPS, 48.27 MiB/s 00:11:39.774 Latency(us) 00:11:39.774 [2024-11-29T11:55:42.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.774 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:39.774 Verification LBA range: start 0x0 length 0x4000 00:11:39.774 NVMe0n1 : 10.07 12377.86 48.35 0.00 0.00 82398.29 25449.81 61603.84 00:11:39.774 [2024-11-29T11:55:42.454Z] =================================================================================================================== 00:11:39.774 [2024-11-29T11:55:42.454Z] Total : 12377.86 48.35 0.00 0.00 82398.29 25449.81 61603.84 00:11:39.774 { 00:11:39.774 "results": [ 00:11:39.774 { 00:11:39.774 "job": "NVMe0n1", 00:11:39.774 "core_mask": "0x1", 00:11:39.774 "workload": "verify", 00:11:39.774 "status": "finished", 00:11:39.774 "verify_range": { 00:11:39.774 "start": 0, 00:11:39.774 "length": 16384 00:11:39.774 }, 00:11:39.774 "queue_depth": 1024, 00:11:39.774 "io_size": 4096, 00:11:39.774 "runtime": 10.068216, 00:11:39.774 "iops": 12377.863168609018, 00:11:39.774 "mibps": 48.351028002378975, 00:11:39.774 "io_failed": 0, 00:11:39.774 "io_timeout": 0, 00:11:39.774 "avg_latency_us": 82398.29339153555, 00:11:39.774 "min_latency_us": 25449.81333333333, 00:11:39.774 "max_latency_us": 61603.84 00:11:39.774 } 00:11:39.774 ], 00:11:39.774 "core_count": 1 00:11:39.774 } 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 756908 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 756908 ']' 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 756908 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756908 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756908' 00:11:39.774 killing process with pid 756908 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 756908 00:11:39.774 Received shutdown signal, test time was about 10.000000 seconds 00:11:39.774 00:11:39.774 Latency(us) 00:11:39.774 [2024-11-29T11:55:42.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.774 [2024-11-29T11:55:42.454Z] =================================================================================================================== 00:11:39.774 [2024-11-29T11:55:42.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 756908 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:11:39.774 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.775 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:39.775 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.775 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:39.775 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.775 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.775 rmmod nvme_tcp 00:11:40.036 rmmod nvme_fabrics 00:11:40.036 rmmod nvme_keyring 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 756690 ']' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 756690 ']' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756690' 00:11:40.036 killing process with pid 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 756690 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.036 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.583 12:55:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.583 00:11:42.583 real 0m21.925s 00:11:42.583 user 0m25.090s 00:11:42.583 sys 0m7.064s 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.583 ************************************ 00:11:42.583 END TEST nvmf_queue_depth 00:11:42.583 ************************************ 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.583 ************************************ 00:11:42.583 START TEST nvmf_target_multipath 00:11:42.583 ************************************ 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:42.583 * Looking for test storage... 
00:11:42.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.583 12:55:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:42.583 12:55:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.583 --rc genhtml_branch_coverage=1 00:11:42.583 --rc genhtml_function_coverage=1 00:11:42.583 --rc genhtml_legend=1 00:11:42.583 --rc geninfo_all_blocks=1 00:11:42.583 --rc geninfo_unexecuted_blocks=1 00:11:42.583 00:11:42.583 ' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.583 --rc genhtml_branch_coverage=1 00:11:42.583 --rc genhtml_function_coverage=1 00:11:42.583 --rc genhtml_legend=1 00:11:42.583 --rc geninfo_all_blocks=1 00:11:42.583 --rc geninfo_unexecuted_blocks=1 00:11:42.583 00:11:42.583 ' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.583 --rc genhtml_branch_coverage=1 00:11:42.583 --rc genhtml_function_coverage=1 00:11:42.583 --rc genhtml_legend=1 00:11:42.583 --rc geninfo_all_blocks=1 00:11:42.583 --rc geninfo_unexecuted_blocks=1 00:11:42.583 00:11:42.583 ' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.583 --rc genhtml_branch_coverage=1 00:11:42.583 --rc genhtml_function_coverage=1 00:11:42.583 --rc genhtml_legend=1 00:11:42.583 --rc geninfo_all_blocks=1 00:11:42.583 --rc geninfo_unexecuted_blocks=1 00:11:42.583 00:11:42.583 ' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.583 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.584 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:50.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:50.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:50.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:50.730 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.731 12:55:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:50.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:11:50.731 00:11:50.731 --- 10.0.0.2 ping statistics --- 00:11:50.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.731 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:11:50.731 00:11:50.731 --- 10.0.0.1 ping statistics --- 00:11:50.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.731 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:50.731 only one NIC for nvmf test 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:50.731 12:55:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.731 rmmod nvme_tcp 00:11:50.731 rmmod nvme_fabrics 00:11:50.731 rmmod nvme_keyring 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.731 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.116 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.377 00:11:52.377 real 0m9.941s 00:11:52.377 user 0m2.176s 00:11:52.377 sys 0m5.706s 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:52.377 ************************************ 00:11:52.377 END TEST nvmf_target_multipath 00:11:52.377 ************************************ 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:52.377 ************************************ 00:11:52.377 START TEST nvmf_zcopy 00:11:52.377 ************************************ 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:52.377 * Looking for test storage... 00:11:52.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.377 12:55:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.639 12:55:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.639 --rc genhtml_branch_coverage=1 00:11:52.639 --rc genhtml_function_coverage=1 00:11:52.639 --rc genhtml_legend=1 00:11:52.639 --rc geninfo_all_blocks=1 00:11:52.639 --rc geninfo_unexecuted_blocks=1 00:11:52.639 00:11:52.639 ' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.639 --rc genhtml_branch_coverage=1 00:11:52.639 --rc genhtml_function_coverage=1 00:11:52.639 --rc genhtml_legend=1 00:11:52.639 --rc geninfo_all_blocks=1 00:11:52.639 --rc geninfo_unexecuted_blocks=1 00:11:52.639 00:11:52.639 ' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.639 --rc genhtml_branch_coverage=1 00:11:52.639 --rc genhtml_function_coverage=1 00:11:52.639 --rc genhtml_legend=1 00:11:52.639 --rc geninfo_all_blocks=1 00:11:52.639 --rc geninfo_unexecuted_blocks=1 00:11:52.639 00:11:52.639 ' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.639 --rc genhtml_branch_coverage=1 00:11:52.639 --rc 
genhtml_function_coverage=1 00:11:52.639 --rc genhtml_legend=1 00:11:52.639 --rc geninfo_all_blocks=1 00:11:52.639 --rc geninfo_unexecuted_blocks=1 00:11:52.639 00:11:52.639 ' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.639 12:55:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.639 12:55:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.639 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.934 12:56:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.934 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:00.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:00.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:00.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:00.935 12:56:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:00.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.935 12:56:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:12:00.935 00:12:00.935 --- 10.0.0.2 ping statistics --- 00:12:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.935 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:12:00.935 00:12:00.935 --- 10.0.0.1 ping statistics --- 00:12:00.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.935 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=767660 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 767660 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 767660 ']' 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.935 12:56:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 [2024-11-29 12:56:02.727867] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:12:00.935 [2024-11-29 12:56:02.727933] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.935 [2024-11-29 12:56:02.828144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.935 [2024-11-29 12:56:02.877549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.935 [2024-11-29 12:56:02.877600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:00.936 [2024-11-29 12:56:02.877608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.936 [2024-11-29 12:56:02.877615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.936 [2024-11-29 12:56:02.877621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.936 [2024-11-29 12:56:02.878376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:00.936 [2024-11-29 12:56:03.605776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.936 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 [2024-11-29 12:56:03.630093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 malloc0 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:01.197 { 00:12:01.197 "params": { 00:12:01.197 "name": "Nvme$subsystem", 00:12:01.197 "trtype": "$TEST_TRANSPORT", 00:12:01.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:01.197 "adrfam": "ipv4", 00:12:01.197 "trsvcid": "$NVMF_PORT", 00:12:01.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:01.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:01.197 "hdgst": ${hdgst:-false}, 00:12:01.197 "ddgst": ${ddgst:-false} 00:12:01.197 }, 00:12:01.197 "method": "bdev_nvme_attach_controller" 00:12:01.197 } 00:12:01.197 EOF 00:12:01.197 )") 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:01.197 12:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:01.197 "params": { 00:12:01.197 "name": "Nvme1", 00:12:01.197 "trtype": "tcp", 00:12:01.197 "traddr": "10.0.0.2", 00:12:01.197 "adrfam": "ipv4", 00:12:01.197 "trsvcid": "4420", 00:12:01.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:01.197 "hdgst": false, 00:12:01.197 "ddgst": false 00:12:01.197 }, 00:12:01.197 "method": "bdev_nvme_attach_controller" 00:12:01.197 }' 00:12:01.197 [2024-11-29 12:56:03.731223] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:12:01.197 [2024-11-29 12:56:03.731289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767719 ] 00:12:01.197 [2024-11-29 12:56:03.821356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.197 [2024-11-29 12:56:03.874730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.810 Running I/O for 10 seconds... 
00:12:03.693 6450.00 IOPS, 50.39 MiB/s [2024-11-29T11:56:07.313Z] 6718.00 IOPS, 52.48 MiB/s [2024-11-29T11:56:08.257Z] 7714.33 IOPS, 60.27 MiB/s [2024-11-29T11:56:09.199Z] 8216.50 IOPS, 64.19 MiB/s [2024-11-29T11:56:10.581Z] 8521.00 IOPS, 66.57 MiB/s [2024-11-29T11:56:11.522Z] 8712.00 IOPS, 68.06 MiB/s [2024-11-29T11:56:12.465Z] 8855.00 IOPS, 69.18 MiB/s [2024-11-29T11:56:13.408Z] 8967.75 IOPS, 70.06 MiB/s [2024-11-29T11:56:14.351Z] 9052.00 IOPS, 70.72 MiB/s [2024-11-29T11:56:14.351Z] 9119.80 IOPS, 71.25 MiB/s 00:12:11.671 Latency(us) 00:12:11.671 [2024-11-29T11:56:14.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:11.671 Verification LBA range: start 0x0 length 0x1000 00:12:11.671 Nvme1n1 : 10.01 9123.15 71.27 0.00 0.00 13984.17 2348.37 28617.39 00:12:11.671 [2024-11-29T11:56:14.351Z] =================================================================================================================== 00:12:11.671 [2024-11-29T11:56:14.351Z] Total : 9123.15 71.27 0.00 0.00 13984.17 2348.37 28617.39 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=769892 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:11.671 12:56:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:11.671 { 00:12:11.671 "params": { 00:12:11.671 "name": "Nvme$subsystem", 00:12:11.671 "trtype": "$TEST_TRANSPORT", 00:12:11.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:11.671 "adrfam": "ipv4", 00:12:11.671 "trsvcid": "$NVMF_PORT", 00:12:11.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:11.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:11.671 "hdgst": ${hdgst:-false}, 00:12:11.671 "ddgst": ${ddgst:-false} 00:12:11.671 }, 00:12:11.671 "method": "bdev_nvme_attach_controller" 00:12:11.671 } 00:12:11.671 EOF 00:12:11.671 )") 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:11.671 [2024-11-29 12:56:14.312708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.671 [2024-11-29 12:56:14.312740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:11.671 12:56:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:11.671 "params": { 00:12:11.671 "name": "Nvme1", 00:12:11.671 "trtype": "tcp", 00:12:11.671 "traddr": "10.0.0.2", 00:12:11.671 "adrfam": "ipv4", 00:12:11.671 "trsvcid": "4420", 00:12:11.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:11.671 "hdgst": false, 00:12:11.671 "ddgst": false 00:12:11.671 }, 00:12:11.671 "method": "bdev_nvme_attach_controller" 00:12:11.671 }' 00:12:11.671 [2024-11-29 12:56:14.324702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.671 [2024-11-29 12:56:14.324711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.671 [2024-11-29 12:56:14.336731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.671 [2024-11-29 12:56:14.336739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.671 [2024-11-29 12:56:14.348760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.671 [2024-11-29 12:56:14.348768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.354242] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:12:11.933 [2024-11-29 12:56:14.354290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769892 ] 00:12:11.933 [2024-11-29 12:56:14.360791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.360799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.372821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.372829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.384853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.384860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.396884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.396892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.408914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.408921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.420944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.420952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.432977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.432989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:12:11.933 [2024-11-29 12:56:14.436357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.933 [2024-11-29 12:56:14.445009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.445020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.457035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.457045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.466026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.933 [2024-11-29 12:56:14.469065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.469074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.481102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.481114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.493131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.493144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.505166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.505176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.517197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.517206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.529224] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.529232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.541270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.541288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.553289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.553300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.565319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.565329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.577349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.577358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.589381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.589387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.933 [2024-11-29 12:56:14.601413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.933 [2024-11-29 12:56:14.601420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.613446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.613454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.625477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.625485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.637508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.637515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.649540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.649552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.661574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.661582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.673601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.673608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.685634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.685641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.697666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.697673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.709699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.709706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.721735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 
[2024-11-29 12:56:14.721750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 Running I/O for 5 seconds... 00:12:12.194 [2024-11-29 12:56:14.733761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.733768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.749410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.749426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.762819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.762834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.775219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.775234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.788100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.788115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.801474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.801489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.814136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.814151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.827153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 
12:56:14.827172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.839740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.839755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.852651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.852665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.194 [2024-11-29 12:56:14.866229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.194 [2024-11-29 12:56:14.866244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.879462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.879478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.892858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.892877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.905970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.905986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.919149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.919166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.932273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.932287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.945483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.945498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.959118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.959133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.972044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.972058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.985262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.985277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:14.998375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:14.998390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:15.011756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:15.011771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:15.025192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:15.025207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:15.037612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:15.037627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 
[2024-11-29 12:56:15.050486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:15.050501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.455 [2024-11-29 12:56:15.063816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.455 [2024-11-29 12:56:15.063831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.456 [2024-11-29 12:56:15.077027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.456 [2024-11-29 12:56:15.077042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.456 [2024-11-29 12:56:15.090713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.456 [2024-11-29 12:56:15.090728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.456 [2024-11-29 12:56:15.103837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.456 [2024-11-29 12:56:15.103851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.456 [2024-11-29 12:56:15.117326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.456 [2024-11-29 12:56:15.117341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.456 [2024-11-29 12:56:15.130587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.456 [2024-11-29 12:56:15.130602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.716 [2024-11-29 12:56:15.143759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.716 [2024-11-29 12:56:15.143774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.716 [2024-11-29 12:56:15.157020] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.716 [2024-11-29 12:56:15.157034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.716 [2024-11-29 12:56:15.170600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.716 [2024-11-29 12:56:15.170616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.716 [2024-11-29 12:56:15.183485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.716 [2024-11-29 12:56:15.183500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.197409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.197425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.209821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.209836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.222980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.222995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.236447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.236462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.249419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.249435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.262581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.262596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.276038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.276052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.289013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.289028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.302413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.302428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.315501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.315516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.328827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.328842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.341957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.341972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.355323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.355338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.368863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 
[2024-11-29 12:56:15.368878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.382292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.382307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.717 [2024-11-29 12:56:15.395402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.717 [2024-11-29 12:56:15.395416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.408755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.408770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.422253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.422268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.434749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.434764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.448097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.448112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.461376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.461390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.474401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.474416] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.487600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.487615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.501127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.501142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.513921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.513935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.526840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.526855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.540115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.540130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.553530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.553544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.566555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.566570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.579527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.579542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:12.977 [2024-11-29 12:56:15.592947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.592962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.605409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.605424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.618677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.618692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.631355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.631372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.977 [2024-11-29 12:56:15.644788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:12.977 [2024-11-29 12:56:15.644804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.239 [2024-11-29 12:56:15.658155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.239 [2024-11-29 12:56:15.658175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.239 [2024-11-29 12:56:15.671572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.239 [2024-11-29 12:56:15.671587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.239 [2024-11-29 12:56:15.684819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.239 [2024-11-29 12:56:15.684835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.239 [2024-11-29 12:56:15.698480] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:13.239 [2024-11-29 12:56:15.698495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.239
[... the same subsystem.c:2126 / nvmf_rpc.c:1520 error pair repeats at roughly 13 ms intervals from 12:56:15.698495 through 12:56:17.947646; repeated entries elided ...]
18990.00 IOPS, 148.36 MiB/s [2024-11-29T11:56:15.919Z]
19086.00 IOPS, 149.11 MiB/s [2024-11-29T11:56:16.963Z]
19116.00 IOPS, 149.34 MiB/s [2024-11-29T11:56:17.745Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.327 [2024-11-29 12:56:17.960746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.327 [2024-11-29 12:56:17.960761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.327 [2024-11-29 12:56:17.973801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.327 [2024-11-29 12:56:17.973817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.327 [2024-11-29 12:56:17.987097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.327 [2024-11-29 12:56:17.987112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.327 [2024-11-29 12:56:18.000259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.327 [2024-11-29 12:56:18.000274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.013482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.013497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.026907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.026922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.040046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.040061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.053340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.053354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:15.588 [2024-11-29 12:56:18.066397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.066411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.079625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.079640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.093229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.093244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.105826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.105841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.118342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.118361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.131317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.131332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.144501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.144516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.157650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.157665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.170951] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.170966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.184280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.184294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.197240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.197254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.210617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.210632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.223956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.223971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.236953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.236969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.250337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.250352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.588 [2024-11-29 12:56:18.262746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.588 [2024-11-29 12:56:18.262760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.275814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.275829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.289246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.289261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.302833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.302848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.315086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.315100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.328396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.328410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.342007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.342021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.354953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.354968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.367929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.367944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.380871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 
[2024-11-29 12:56:18.380886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.393809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.393824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.407087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.407102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.420212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.420227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.433010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.433024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.445607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.445621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.458042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.458057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.471488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.471502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.484244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.484258] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.496655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.496669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.509724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.509739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.849 [2024-11-29 12:56:18.522801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.849 [2024-11-29 12:56:18.522816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.109 [2024-11-29 12:56:18.536262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.109 [2024-11-29 12:56:18.536277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.109 [2024-11-29 12:56:18.548836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.109 [2024-11-29 12:56:18.548851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.109 [2024-11-29 12:56:18.562141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.109 [2024-11-29 12:56:18.562156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.109 [2024-11-29 12:56:18.574808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.109 [2024-11-29 12:56:18.574823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.587974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.587988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:16.110 [2024-11-29 12:56:18.600859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.600874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.613944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.613959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.627286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.627300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.640462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.640477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.653743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.653758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.667378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.667393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.679802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.679817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.693144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.693163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.706279] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.706294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.719895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.719909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.732308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.732323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 19140.00 IOPS, 149.53 MiB/s [2024-11-29T11:56:18.790Z] [2024-11-29 12:56:18.744638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.744653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.756857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.756872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.770024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.770039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.110 [2024-11-29 12:56:18.782917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.110 [2024-11-29 12:56:18.782932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.795982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.795997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.808932] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.808946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.822418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.822433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.835513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.835528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.849226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.849244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.861271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.861286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.874428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.874443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.887567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.887581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.900509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.900524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.913753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.913767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.927469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.927484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.940993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.941008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.954164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.954179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.966739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.966753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.979347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.979361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:18.992382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:18.992397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:19.004727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:19.004742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:19.018131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 
[2024-11-29 12:56:19.018146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:19.030787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:19.030802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.370 [2024-11-29 12:56:19.043796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.370 [2024-11-29 12:56:19.043812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.056994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.057009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.070094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.070110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.083437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.083452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.096734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.096757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.109862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.109878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.123224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.123239] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.136833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.136848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.150170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.150185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.630 [2024-11-29 12:56:19.162955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.630 [2024-11-29 12:56:19.162970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.176262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.176277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.188634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.188649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.201572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.201587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.214799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.214814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.227996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.228011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:16.631 [2024-11-29 12:56:19.241116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.241131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.254556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.254572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.267799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.267814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.280905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.280920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.293926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.293941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.631 [2024-11-29 12:56:19.307082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.631 [2024-11-29 12:56:19.307097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.320086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.320101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.333031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.333046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.346436] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.346454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.359972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.359987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.372513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.372527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.385839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.385854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.399515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.399529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.412556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.412571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.425387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.425402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.438595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.438610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.451688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.451704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.465130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.465146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.478743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.478758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.492174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.492189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.504772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.891 [2024-11-29 12:56:19.504787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.891 [2024-11-29 12:56:19.517824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.892 [2024-11-29 12:56:19.517839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.892 [2024-11-29 12:56:19.530842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.892 [2024-11-29 12:56:19.530857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.892 [2024-11-29 12:56:19.544435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.892 [2024-11-29 12:56:19.544450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.892 [2024-11-29 12:56:19.557538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.892 
[2024-11-29 12:56:19.557553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.892 [2024-11-29 12:56:19.570921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.892 [2024-11-29 12:56:19.570936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.583610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.583625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.596938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.596957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.610075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.610090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.623215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.623230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.636818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.636833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.649811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.649826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.663384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.663399] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.675881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.675897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.688915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.688930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.701942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.701956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.715297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.715312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.727828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.727843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.741024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.741039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 19148.80 IOPS, 149.60 MiB/s 00:12:17.152 Latency(us) 00:12:17.152 [2024-11-29T11:56:19.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.152 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:17.152 Nvme1n1 : 5.01 19150.85 149.62 0.00 0.00 6678.14 3112.96 15947.09 00:12:17.152 [2024-11-29T11:56:19.832Z] 
=================================================================================================================== 00:12:17.152 [2024-11-29T11:56:19.832Z] Total : 19150.85 149.62 0.00 0.00 6678.14 3112.96 15947.09 00:12:17.152 [2024-11-29 12:56:19.750884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.750897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.762924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.762937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.774945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.774957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.786976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.786989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.799005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.799016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.811031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.811041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.152 [2024-11-29 12:56:19.823063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.152 [2024-11-29 12:56:19.823071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.413 [2024-11-29 12:56:19.835098] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.413 [2024-11-29 12:56:19.835108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.413 [2024-11-29 12:56:19.847126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.413 [2024-11-29 12:56:19.847133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (769892) - No such process 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 769892 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:17.413 delay0 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.413 12:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:17.413 [2024-11-29 12:56:20.015690] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:25.543 Initializing NVMe Controllers 00:12:25.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:25.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:25.543 Initialization complete. Launching workers. 00:12:25.543 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 231, failed: 32624 00:12:25.543 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32738, failed to submit 117 00:12:25.543 success 32656, unsuccessful 82, failed 0 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.543 
rmmod nvme_tcp 00:12:25.543 rmmod nvme_fabrics 00:12:25.543 rmmod nvme_keyring 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 767660 ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 767660 ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767660' 00:12:25.543 killing process with pid 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 767660 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.543 12:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.925 00:12:26.925 real 0m34.656s 00:12:26.925 user 0m45.604s 00:12:26.925 sys 0m12.037s 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.925 ************************************ 00:12:26.925 END TEST nvmf_zcopy 00:12:26.925 ************************************ 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.925 
12:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.925 12:56:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.186 ************************************ 00:12:27.186 START TEST nvmf_nmic 00:12:27.186 ************************************ 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:27.186 * Looking for test storage... 00:12:27.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 
-- # ver1_l=2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:27.186 12:56:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.186 --rc genhtml_branch_coverage=1 00:12:27.186 --rc genhtml_function_coverage=1 00:12:27.186 --rc genhtml_legend=1 00:12:27.186 --rc geninfo_all_blocks=1 00:12:27.186 --rc geninfo_unexecuted_blocks=1 00:12:27.186 00:12:27.186 ' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.186 --rc genhtml_branch_coverage=1 00:12:27.186 --rc genhtml_function_coverage=1 00:12:27.186 --rc genhtml_legend=1 00:12:27.186 --rc geninfo_all_blocks=1 00:12:27.186 --rc geninfo_unexecuted_blocks=1 00:12:27.186 00:12:27.186 ' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.186 --rc genhtml_branch_coverage=1 00:12:27.186 --rc genhtml_function_coverage=1 00:12:27.186 --rc genhtml_legend=1 00:12:27.186 --rc geninfo_all_blocks=1 00:12:27.186 --rc geninfo_unexecuted_blocks=1 00:12:27.186 00:12:27.186 ' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.186 --rc genhtml_branch_coverage=1 00:12:27.186 --rc genhtml_function_coverage=1 00:12:27.186 --rc genhtml_legend=1 00:12:27.186 --rc geninfo_all_blocks=1 00:12:27.186 --rc geninfo_unexecuted_blocks=1 00:12:27.186 00:12:27.186 ' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.186 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.187 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.447 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.447 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.447 
12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.447 12:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.582 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.583 12:56:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:35.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:35.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:35.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:35.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:35.583 
12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.583 12:56:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.583 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:12:35.584 00:12:35.584 --- 10.0.0.2 ping statistics --- 00:12:35.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.584 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:12:35.584 00:12:35.584 --- 10.0.0.1 ping statistics --- 00:12:35.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.584 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=776726 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 776726 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 776726 ']' 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.584 12:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 [2024-11-29 12:56:37.315546] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:12:35.584 [2024-11-29 12:56:37.315599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.584 [2024-11-29 12:56:37.411409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.584 [2024-11-29 12:56:37.455708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.584 [2024-11-29 12:56:37.455752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:35.584 [2024-11-29 12:56:37.455761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.584 [2024-11-29 12:56:37.455768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.584 [2024-11-29 12:56:37.455774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.584 [2024-11-29 12:56:37.457577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.584 [2024-11-29 12:56:37.457735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.584 [2024-11-29 12:56:37.457893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.584 [2024-11-29 12:56:37.457893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 [2024-11-29 12:56:38.158007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.584 
12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 Malloc0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 [2024-11-29 12:56:38.226644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:35.584 test case1: single bdev can't be used in multiple subsystems 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.584 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.844 [2024-11-29 12:56:38.262562] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:35.844 [2024-11-29 
12:56:38.262583] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:35.844 [2024-11-29 12:56:38.262591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:35.844 request: 00:12:35.844 { 00:12:35.844 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:35.844 "namespace": { 00:12:35.844 "bdev_name": "Malloc0", 00:12:35.844 "no_auto_visible": false, 00:12:35.844 "hide_metadata": false 00:12:35.844 }, 00:12:35.844 "method": "nvmf_subsystem_add_ns", 00:12:35.844 "req_id": 1 00:12:35.844 } 00:12:35.844 Got JSON-RPC error response 00:12:35.844 response: 00:12:35.844 { 00:12:35.844 "code": -32602, 00:12:35.844 "message": "Invalid parameters" 00:12:35.844 } 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:35.844 Adding namespace failed - expected result. 
00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:35.844 test case2: host connect to nvmf target in multiple paths 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:35.844 [2024-11-29 12:56:38.274707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.844 12:56:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.229 12:56:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:39.140 12:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.140 12:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:39.140 12:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.140 12:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:39.140 12:56:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:41.054 12:56:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:41.054 [global] 00:12:41.054 thread=1 00:12:41.054 invalidate=1 00:12:41.054 rw=write 00:12:41.054 time_based=1 00:12:41.054 runtime=1 00:12:41.054 ioengine=libaio 00:12:41.054 direct=1 00:12:41.054 bs=4096 00:12:41.054 iodepth=1 00:12:41.054 norandommap=0 00:12:41.054 numjobs=1 00:12:41.054 00:12:41.054 verify_dump=1 00:12:41.054 verify_backlog=512 00:12:41.054 verify_state_save=0 00:12:41.054 do_verify=1 00:12:41.054 verify=crc32c-intel 00:12:41.054 [job0] 00:12:41.054 filename=/dev/nvme0n1 00:12:41.054 Could not set queue depth (nvme0n1) 00:12:41.316 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:41.316 fio-3.35 00:12:41.316 Starting 1 thread 00:12:42.438 00:12:42.438 job0: (groupid=0, jobs=1): err= 0: pid=778237: Fri Nov 29 12:56:44 2024 00:12:42.438 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1014msec) 00:12:42.438 slat (nsec): min=26210, max=32982, avg=27055.81, stdev=1638.49 00:12:42.438 clat (usec): min=41021, max=42125, avg=41855.10, stdev=301.19 00:12:42.438 lat (usec): min=41048, max=42151, 
avg=41882.16, stdev=301.25 00:12:42.438 clat percentiles (usec): 00:12:42.438 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:12:42.438 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:42.438 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:42.438 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:42.438 | 99.99th=[42206] 00:12:42.438 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:12:42.438 slat (usec): min=10, max=24499, avg=78.17, stdev=1081.46 00:12:42.438 clat (usec): min=161, max=884, avg=586.05, stdev=97.10 00:12:42.438 lat (usec): min=177, max=25181, avg=664.22, stdev=1090.37 00:12:42.438 clat percentiles (usec): 00:12:42.438 | 1.00th=[ 338], 5.00th=[ 412], 10.00th=[ 469], 20.00th=[ 502], 00:12:42.438 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 611], 00:12:42.438 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:12:42.438 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 881], 99.95th=[ 881], 00:12:42.438 | 99.99th=[ 881] 00:12:42.438 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:42.438 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:42.438 lat (usec) : 250=0.19%, 500=18.56%, 750=75.38%, 1000=2.84% 00:12:42.438 lat (msec) : 50=3.03% 00:12:42.438 cpu : usr=0.39%, sys=1.88%, ctx=531, majf=0, minf=1 00:12:42.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:42.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.438 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:42.438 00:12:42.438 Run status group 0 (all jobs): 00:12:42.438 READ: bw=63.1KiB/s (64.6kB/s), 63.1KiB/s-63.1KiB/s (64.6kB/s-64.6kB/s), 
io=64.0KiB (65.5kB), run=1014-1014msec 00:12:42.438 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:12:42.438 00:12:42.438 Disk stats (read/write): 00:12:42.438 nvme0n1: ios=39/512, merge=0/0, ticks=1530/293, in_queue=1823, util=98.70% 00:12:42.438 12:56:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.438 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.699 rmmod nvme_tcp 00:12:42.699 rmmod nvme_fabrics 00:12:42.699 rmmod nvme_keyring 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 776726 ']' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 776726 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 776726 ']' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 776726 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776726 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776726' 00:12:42.699 killing process with pid 776726 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 776726 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 776726 00:12:42.699 12:56:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.699 12:56:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.243 00:12:45.243 real 0m17.824s 00:12:45.243 user 0m47.589s 00:12:45.243 sys 0m6.540s 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:45.243 ************************************ 00:12:45.243 END TEST nvmf_nmic 00:12:45.243 ************************************ 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:45.243 ************************************ 00:12:45.243 START TEST nvmf_fio_target 00:12:45.243 ************************************ 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:45.243 * Looking for test storage... 00:12:45.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.243 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:45.244 12:56:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.244 --rc genhtml_branch_coverage=1 00:12:45.244 --rc genhtml_function_coverage=1 00:12:45.244 --rc genhtml_legend=1 00:12:45.244 --rc geninfo_all_blocks=1 00:12:45.244 --rc geninfo_unexecuted_blocks=1 00:12:45.244 00:12:45.244 ' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.244 --rc genhtml_branch_coverage=1 00:12:45.244 --rc genhtml_function_coverage=1 00:12:45.244 --rc genhtml_legend=1 00:12:45.244 --rc geninfo_all_blocks=1 00:12:45.244 --rc geninfo_unexecuted_blocks=1 00:12:45.244 00:12:45.244 ' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.244 --rc genhtml_branch_coverage=1 00:12:45.244 --rc genhtml_function_coverage=1 00:12:45.244 --rc genhtml_legend=1 00:12:45.244 --rc geninfo_all_blocks=1 00:12:45.244 --rc geninfo_unexecuted_blocks=1 00:12:45.244 00:12:45.244 ' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:45.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.244 --rc genhtml_branch_coverage=1 00:12:45.244 --rc genhtml_function_coverage=1 00:12:45.244 --rc genhtml_legend=1 00:12:45.244 --rc geninfo_all_blocks=1 00:12:45.244 --rc geninfo_unexecuted_blocks=1 00:12:45.244 00:12:45.244 ' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.244 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.245 12:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:53.382 12:56:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:53.382 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:53.382 12:56:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:53.382 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.382 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:53.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:53.383 Found net devices under 0000:4b:00.1: cvl_0_1 
00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.383 12:56:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:12:53.383 00:12:53.383 --- 10.0.0.2 ping statistics --- 00:12:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.383 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:12:53.383 00:12:53.383 --- 10.0.0.1 ping statistics --- 00:12:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.383 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=782623 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 782623 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 782623 ']' 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.383 12:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.383 [2024-11-29 12:56:55.335201] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:12:53.383 [2024-11-29 12:56:55.335272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.383 [2024-11-29 12:56:55.437194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.383 [2024-11-29 12:56:55.490903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.383 [2024-11-29 12:56:55.490961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.383 [2024-11-29 12:56:55.490970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.383 [2024-11-29 12:56:55.490978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.383 [2024-11-29 12:56:55.490984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.383 [2024-11-29 12:56:55.493076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.383 [2024-11-29 12:56:55.493230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.383 [2024-11-29 12:56:55.493308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.383 [2024-11-29 12:56:55.493308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.644 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:53.904 [2024-11-29 12:56:56.364276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.904 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.165 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:54.165 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.425 12:56:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:54.425 12:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.425 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:54.425 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.686 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:54.686 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:54.947 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:55.208 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:55.208 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:55.469 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:55.469 12:56:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:55.469 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:55.469 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:55.729 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.989 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:55.989 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.250 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:56.250 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:56.250 12:56:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.510 [2024-11-29 12:56:59.012121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.511 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:56.772 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:56.772 12:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:58.687 12:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:00.623 12:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:00.623 [global] 00:13:00.623 thread=1 00:13:00.623 invalidate=1 00:13:00.623 rw=write 00:13:00.623 time_based=1 00:13:00.623 runtime=1 00:13:00.623 ioengine=libaio 00:13:00.623 direct=1 00:13:00.623 bs=4096 00:13:00.623 iodepth=1 00:13:00.623 norandommap=0 00:13:00.623 numjobs=1 00:13:00.623 00:13:00.623 
verify_dump=1 00:13:00.623 verify_backlog=512 00:13:00.623 verify_state_save=0 00:13:00.623 do_verify=1 00:13:00.623 verify=crc32c-intel 00:13:00.623 [job0] 00:13:00.623 filename=/dev/nvme0n1 00:13:00.623 [job1] 00:13:00.623 filename=/dev/nvme0n2 00:13:00.623 [job2] 00:13:00.623 filename=/dev/nvme0n3 00:13:00.623 [job3] 00:13:00.623 filename=/dev/nvme0n4 00:13:00.623 Could not set queue depth (nvme0n1) 00:13:00.623 Could not set queue depth (nvme0n2) 00:13:00.624 Could not set queue depth (nvme0n3) 00:13:00.624 Could not set queue depth (nvme0n4) 00:13:00.885 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.885 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.885 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.885 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:00.885 fio-3.35 00:13:00.885 Starting 4 threads 00:13:02.335 00:13:02.335 job0: (groupid=0, jobs=1): err= 0: pid=784648: Fri Nov 29 12:57:04 2024 00:13:02.335 read: IOPS=528, BW=2114KiB/s (2165kB/s)(2116KiB/1001msec) 00:13:02.335 slat (nsec): min=6542, max=61896, avg=22924.38, stdev=7260.61 00:13:02.335 clat (usec): min=378, max=41006, avg=914.57, stdev=1756.33 00:13:02.335 lat (usec): min=398, max=41033, avg=937.49, stdev=1756.80 00:13:02.335 clat percentiles (usec): 00:13:02.335 | 1.00th=[ 437], 5.00th=[ 515], 10.00th=[ 586], 20.00th=[ 652], 00:13:02.335 | 30.00th=[ 725], 40.00th=[ 799], 50.00th=[ 848], 60.00th=[ 922], 00:13:02.336 | 70.00th=[ 988], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:13:02.336 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[41157], 99.95th=[41157], 00:13:02.336 | 99.99th=[41157] 00:13:02.336 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:02.336 slat (nsec): min=9155, max=65743, 
avg=22776.41, stdev=11818.82 00:13:02.336 clat (usec): min=122, max=846, avg=460.49, stdev=144.61 00:13:02.336 lat (usec): min=131, max=879, avg=483.27, stdev=151.96 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 182], 5.00th=[ 260], 10.00th=[ 281], 20.00th=[ 314], 00:13:02.336 | 30.00th=[ 359], 40.00th=[ 404], 50.00th=[ 457], 60.00th=[ 502], 00:13:02.336 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 660], 95.00th=[ 693], 00:13:02.336 | 99.00th=[ 734], 99.50th=[ 775], 99.90th=[ 799], 99.95th=[ 848], 00:13:02.336 | 99.99th=[ 848] 00:13:02.336 bw ( KiB/s): min= 4096, max= 4096, per=31.50%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.336 lat (usec) : 250=2.64%, 500=37.99%, 750=35.99%, 1000=14.36% 00:13:02.336 lat (msec) : 2=8.95%, 50=0.06% 00:13:02.336 cpu : usr=1.60%, sys=3.90%, ctx=1554, majf=0, minf=1 00:13:02.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.336 job1: (groupid=0, jobs=1): err= 0: pid=784650: Fri Nov 29 12:57:04 2024 00:13:02.336 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:02.336 slat (nsec): min=6746, max=61190, avg=23276.09, stdev=7721.92 00:13:02.336 clat (usec): min=128, max=753, avg=492.04, stdev=94.67 00:13:02.336 lat (usec): min=154, max=779, avg=515.32, stdev=95.94 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 215], 5.00th=[ 297], 10.00th=[ 351], 20.00th=[ 429], 00:13:02.336 | 30.00th=[ 465], 40.00th=[ 490], 50.00th=[ 515], 60.00th=[ 537], 00:13:02.336 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 611], 00:13:02.336 | 99.00th=[ 652], 99.50th=[ 668], 
99.90th=[ 693], 99.95th=[ 750], 00:13:02.336 | 99.99th=[ 750] 00:13:02.336 write: IOPS=1331, BW=5327KiB/s (5455kB/s)(5332KiB/1001msec); 0 zone resets 00:13:02.336 slat (nsec): min=9790, max=70515, avg=25555.88, stdev=12294.46 00:13:02.336 clat (usec): min=92, max=753, avg=316.83, stdev=141.02 00:13:02.336 lat (usec): min=102, max=789, avg=342.38, stdev=146.84 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 100], 5.00th=[ 110], 10.00th=[ 121], 20.00th=[ 212], 00:13:02.336 | 30.00th=[ 251], 40.00th=[ 277], 50.00th=[ 306], 60.00th=[ 338], 00:13:02.336 | 70.00th=[ 367], 80.00th=[ 408], 90.00th=[ 506], 95.00th=[ 611], 00:13:02.336 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 750], 99.95th=[ 750], 00:13:02.336 | 99.99th=[ 750] 00:13:02.336 bw ( KiB/s): min= 6664, max= 6664, per=51.25%, avg=6664.00, stdev= 0.00, samples=1 00:13:02.336 iops : min= 1666, max= 1666, avg=1666.00, stdev= 0.00, samples=1 00:13:02.336 lat (usec) : 100=0.68%, 250=17.01%, 500=52.06%, 750=30.12%, 1000=0.13% 00:13:02.336 cpu : usr=3.50%, sys=5.50%, ctx=2359, majf=0, minf=1 00:13:02.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 issued rwts: total=1024,1333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.336 job2: (groupid=0, jobs=1): err= 0: pid=784651: Fri Nov 29 12:57:04 2024 00:13:02.336 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:13:02.336 slat (nsec): min=26549, max=31073, avg=27057.95, stdev=991.63 00:13:02.336 clat (usec): min=40754, max=41265, avg=40979.15, stdev=105.57 00:13:02.336 lat (usec): min=40781, max=41291, avg=41006.21, stdev=105.73 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:02.336 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:02.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:02.336 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:02.336 | 99.99th=[41157] 00:13:02.336 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:13:02.336 slat (nsec): min=9679, max=68646, avg=29025.28, stdev=10295.64 00:13:02.336 clat (usec): min=120, max=617, avg=414.87, stdev=82.43 00:13:02.336 lat (usec): min=131, max=650, avg=443.89, stdev=87.18 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 206], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 343], 00:13:02.336 | 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 433], 60.00th=[ 453], 00:13:02.336 | 70.00th=[ 465], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 537], 00:13:02.336 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 619], 00:13:02.336 | 99.99th=[ 619] 00:13:02.336 bw ( KiB/s): min= 4096, max= 4096, per=31.50%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.336 lat (usec) : 250=2.07%, 500=80.79%, 750=13.56% 00:13:02.336 lat (msec) : 50=3.58% 00:13:02.336 cpu : usr=0.40%, sys=1.68%, ctx=532, majf=0, minf=1 00:13:02.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.336 job3: (groupid=0, jobs=1): err= 0: pid=784653: Fri Nov 29 12:57:04 2024 00:13:02.336 read: IOPS=44, BW=177KiB/s (181kB/s)(184KiB/1040msec) 00:13:02.336 slat (nsec): min=8360, max=45697, avg=25992.28, stdev=6088.44 00:13:02.336 clat (usec): min=518, max=42033, avg=15764.85, stdev=19826.97 00:13:02.336 
lat (usec): min=545, max=42060, avg=15790.84, stdev=19828.94 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 519], 5.00th=[ 603], 10.00th=[ 611], 20.00th=[ 685], 00:13:02.336 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 848], 60.00th=[ 898], 00:13:02.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:02.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:02.336 | 99.99th=[42206] 00:13:02.336 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:13:02.336 slat (nsec): min=10213, max=58978, avg=31378.24, stdev=9683.50 00:13:02.336 clat (usec): min=187, max=2122, avg=572.67, stdev=157.59 00:13:02.336 lat (usec): min=199, max=2159, avg=604.04, stdev=160.55 00:13:02.336 clat percentiles (usec): 00:13:02.336 | 1.00th=[ 262], 5.00th=[ 306], 10.00th=[ 383], 20.00th=[ 482], 00:13:02.336 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 594], 00:13:02.336 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 816], 00:13:02.336 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 2114], 99.95th=[ 2114], 00:13:02.336 | 99.99th=[ 2114] 00:13:02.336 bw ( KiB/s): min= 4096, max= 4096, per=31.50%, avg=4096.00, stdev= 0.00, samples=1 00:13:02.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:02.336 lat (usec) : 250=0.72%, 500=22.76%, 750=59.50%, 1000=13.80% 00:13:02.336 lat (msec) : 4=0.18%, 50=3.05% 00:13:02.336 cpu : usr=0.77%, sys=1.64%, ctx=560, majf=0, minf=1 00:13:02.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.336 issued rwts: total=46,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:02.336 00:13:02.336 Run status group 0 (all jobs): 00:13:02.336 READ: bw=6223KiB/s (6372kB/s), 
75.2KiB/s-4092KiB/s (77.1kB/s-4190kB/s), io=6472KiB (6627kB), run=1001-1040msec 00:13:02.336 WRITE: bw=12.7MiB/s (13.3MB/s), 1969KiB/s-5327KiB/s (2016kB/s-5455kB/s), io=13.2MiB (13.8MB), run=1001-1040msec 00:13:02.336 00:13:02.336 Disk stats (read/write): 00:13:02.336 nvme0n1: ios=562/784, merge=0/0, ticks=521/313, in_queue=834, util=87.27% 00:13:02.336 nvme0n2: ios=1017/1024, merge=0/0, ticks=1429/266, in_queue=1695, util=96.83% 00:13:02.336 nvme0n3: ios=14/512, merge=0/0, ticks=574/216, in_queue=790, util=88.36% 00:13:02.336 nvme0n4: ios=70/512, merge=0/0, ticks=1275/280, in_queue=1555, util=96.89% 00:13:02.336 12:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:02.336 [global] 00:13:02.336 thread=1 00:13:02.336 invalidate=1 00:13:02.336 rw=randwrite 00:13:02.336 time_based=1 00:13:02.336 runtime=1 00:13:02.336 ioengine=libaio 00:13:02.336 direct=1 00:13:02.336 bs=4096 00:13:02.336 iodepth=1 00:13:02.336 norandommap=0 00:13:02.336 numjobs=1 00:13:02.336 00:13:02.336 verify_dump=1 00:13:02.336 verify_backlog=512 00:13:02.336 verify_state_save=0 00:13:02.336 do_verify=1 00:13:02.336 verify=crc32c-intel 00:13:02.336 [job0] 00:13:02.336 filename=/dev/nvme0n1 00:13:02.336 [job1] 00:13:02.336 filename=/dev/nvme0n2 00:13:02.336 [job2] 00:13:02.336 filename=/dev/nvme0n3 00:13:02.336 [job3] 00:13:02.336 filename=/dev/nvme0n4 00:13:02.336 Could not set queue depth (nvme0n1) 00:13:02.336 Could not set queue depth (nvme0n2) 00:13:02.336 Could not set queue depth (nvme0n3) 00:13:02.336 Could not set queue depth (nvme0n4) 00:13:02.601 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.601 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.601 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.601 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.601 fio-3.35 00:13:02.601 Starting 4 threads 00:13:04.010 00:13:04.010 job0: (groupid=0, jobs=1): err= 0: pid=785175: Fri Nov 29 12:57:06 2024 00:13:04.010 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:13:04.010 slat (nsec): min=24678, max=25335, avg=24944.59, stdev=214.88 00:13:04.010 clat (usec): min=1079, max=42052, avg=39377.70, stdev=9876.42 00:13:04.010 lat (usec): min=1104, max=42077, avg=39402.65, stdev=9876.44 00:13:04.010 clat percentiles (usec): 00:13:04.010 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41157], 00:13:04.010 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:13:04.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:04.010 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:04.010 | 99.99th=[42206] 00:13:04.010 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:13:04.010 slat (nsec): min=9312, max=64069, avg=29171.13, stdev=7760.84 00:13:04.010 clat (usec): min=211, max=1000, avg=609.71, stdev=139.24 00:13:04.010 lat (usec): min=222, max=1041, avg=638.88, stdev=141.46 00:13:04.010 clat percentiles (usec): 00:13:04.010 | 1.00th=[ 285], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 486], 00:13:04.010 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 660], 00:13:04.010 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 816], 00:13:04.010 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:13:04.010 | 99.99th=[ 1004] 00:13:04.010 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:04.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:04.010 lat (usec) : 250=0.57%, 500=21.17%, 750=59.17%, 1000=15.69% 00:13:04.010 lat (msec) : 2=0.38%, 50=3.02% 00:13:04.010 
cpu : usr=1.10%, sys=1.20%, ctx=529, majf=0, minf=1 00:13:04.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.010 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.010 job1: (groupid=0, jobs=1): err= 0: pid=785176: Fri Nov 29 12:57:06 2024 00:13:04.010 read: IOPS=274, BW=1098KiB/s (1124kB/s)(1100KiB/1002msec) 00:13:04.010 slat (nsec): min=19430, max=65243, avg=26874.86, stdev=3357.77 00:13:04.010 clat (usec): min=825, max=41720, avg=2523.65, stdev=7510.00 00:13:04.010 lat (usec): min=852, max=41747, avg=2550.53, stdev=7509.95 00:13:04.010 clat percentiles (usec): 00:13:04.010 | 1.00th=[ 881], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1029], 00:13:04.010 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:13:04.011 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1205], 00:13:04.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:04.011 | 99.99th=[41681] 00:13:04.011 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:13:04.011 slat (nsec): min=8796, max=54763, avg=30967.59, stdev=7744.32 00:13:04.011 clat (usec): min=236, max=896, avg=543.10, stdev=124.75 00:13:04.011 lat (usec): min=246, max=929, avg=574.06, stdev=126.94 00:13:04.011 clat percentiles (usec): 00:13:04.011 | 1.00th=[ 277], 5.00th=[ 338], 10.00th=[ 367], 20.00th=[ 429], 00:13:04.011 | 30.00th=[ 474], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 586], 00:13:04.011 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 742], 00:13:04.011 | 99.00th=[ 799], 99.50th=[ 840], 99.90th=[ 898], 99.95th=[ 898], 00:13:04.011 | 99.99th=[ 898] 00:13:04.011 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, 
samples=1 00:13:04.011 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:04.011 lat (usec) : 250=0.13%, 500=24.90%, 750=37.10%, 1000=7.37% 00:13:04.011 lat (msec) : 2=29.22%, 50=1.27% 00:13:04.011 cpu : usr=1.60%, sys=3.10%, ctx=787, majf=0, minf=1 00:13:04.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 issued rwts: total=275,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.011 job2: (groupid=0, jobs=1): err= 0: pid=785177: Fri Nov 29 12:57:06 2024 00:13:04.011 read: IOPS=20, BW=81.0KiB/s (82.9kB/s)(84.0KiB/1037msec) 00:13:04.011 slat (nsec): min=25517, max=26422, avg=25901.95, stdev=218.92 00:13:04.011 clat (usec): min=796, max=41165, avg=35253.41, stdev=14389.02 00:13:04.011 lat (usec): min=822, max=41191, avg=35279.32, stdev=14388.99 00:13:04.011 clat percentiles (usec): 00:13:04.011 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 963], 20.00th=[41157], 00:13:04.011 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:04.011 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:04.011 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:04.011 | 99.99th=[41157] 00:13:04.011 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:13:04.011 slat (nsec): min=9482, max=56620, avg=29454.72, stdev=8408.77 00:13:04.011 clat (usec): min=164, max=837, avg=540.80, stdev=113.35 00:13:04.011 lat (usec): min=174, max=869, avg=570.26, stdev=116.19 00:13:04.011 clat percentiles (usec): 00:13:04.011 | 1.00th=[ 223], 5.00th=[ 363], 10.00th=[ 392], 20.00th=[ 441], 00:13:04.011 | 30.00th=[ 498], 40.00th=[ 519], 50.00th=[ 545], 60.00th=[ 578], 00:13:04.011 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 
668], 95.00th=[ 701], 00:13:04.011 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 840], 99.95th=[ 840], 00:13:04.011 | 99.99th=[ 840] 00:13:04.011 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:04.011 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:04.011 lat (usec) : 250=1.31%, 500=29.27%, 750=63.23%, 1000=2.81% 00:13:04.011 lat (msec) : 50=3.38% 00:13:04.011 cpu : usr=0.77%, sys=1.45%, ctx=533, majf=0, minf=1 00:13:04.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.011 job3: (groupid=0, jobs=1): err= 0: pid=785178: Fri Nov 29 12:57:06 2024 00:13:04.011 read: IOPS=17, BW=69.1KiB/s (70.8kB/s)(72.0KiB/1042msec) 00:13:04.011 slat (nsec): min=26982, max=31343, avg=27479.17, stdev=991.65 00:13:04.011 clat (usec): min=41735, max=42077, avg=41950.06, stdev=83.51 00:13:04.011 lat (usec): min=41762, max=42104, avg=41977.54, stdev=83.14 00:13:04.011 clat percentiles (usec): 00:13:04.011 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:13:04.011 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:04.011 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:04.011 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:04.011 | 99.99th=[42206] 00:13:04.011 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:13:04.011 slat (nsec): min=9829, max=52812, avg=30780.01, stdev=9240.89 00:13:04.011 clat (usec): min=176, max=879, avg=520.41, stdev=130.45 00:13:04.011 lat (usec): min=193, max=915, avg=551.19, stdev=133.68 00:13:04.011 clat percentiles (usec): 
00:13:04.011 | 1.00th=[ 260], 5.00th=[ 306], 10.00th=[ 351], 20.00th=[ 400], 00:13:04.011 | 30.00th=[ 445], 40.00th=[ 486], 50.00th=[ 529], 60.00th=[ 562], 00:13:04.011 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 734], 00:13:04.011 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:13:04.011 | 99.99th=[ 881] 00:13:04.011 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:13:04.011 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:04.011 lat (usec) : 250=0.94%, 500=40.38%, 750=52.26%, 1000=3.02% 00:13:04.011 lat (msec) : 50=3.40% 00:13:04.011 cpu : usr=0.77%, sys=1.44%, ctx=531, majf=0, minf=1 00:13:04.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.011 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.011 00:13:04.011 Run status group 0 (all jobs): 00:13:04.011 READ: bw=1271KiB/s (1301kB/s), 67.9KiB/s-1098KiB/s (69.6kB/s-1124kB/s), io=1324KiB (1356kB), run=1001-1042msec 00:13:04.011 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2046KiB/s (2013kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1042msec 00:13:04.011 00:13:04.011 Disk stats (read/write): 00:13:04.011 nvme0n1: ios=63/512, merge=0/0, ticks=559/291, in_queue=850, util=87.78% 00:13:04.011 nvme0n2: ios=248/512, merge=0/0, ticks=545/227, in_queue=772, util=88.06% 00:13:04.011 nvme0n3: ios=16/512, merge=0/0, ticks=536/262, in_queue=798, util=88.38% 00:13:04.011 nvme0n4: ios=36/512, merge=0/0, ticks=1460/244, in_queue=1704, util=96.58% 00:13:04.011 12:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 
00:13:04.011 [global] 00:13:04.011 thread=1 00:13:04.011 invalidate=1 00:13:04.011 rw=write 00:13:04.011 time_based=1 00:13:04.011 runtime=1 00:13:04.011 ioengine=libaio 00:13:04.011 direct=1 00:13:04.011 bs=4096 00:13:04.011 iodepth=128 00:13:04.011 norandommap=0 00:13:04.011 numjobs=1 00:13:04.011 00:13:04.011 verify_dump=1 00:13:04.011 verify_backlog=512 00:13:04.011 verify_state_save=0 00:13:04.011 do_verify=1 00:13:04.011 verify=crc32c-intel 00:13:04.011 [job0] 00:13:04.011 filename=/dev/nvme0n1 00:13:04.011 [job1] 00:13:04.011 filename=/dev/nvme0n2 00:13:04.011 [job2] 00:13:04.011 filename=/dev/nvme0n3 00:13:04.011 [job3] 00:13:04.011 filename=/dev/nvme0n4 00:13:04.011 Could not set queue depth (nvme0n1) 00:13:04.011 Could not set queue depth (nvme0n2) 00:13:04.011 Could not set queue depth (nvme0n3) 00:13:04.011 Could not set queue depth (nvme0n4) 00:13:04.279 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:04.279 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:04.279 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:04.279 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:04.279 fio-3.35 00:13:04.279 Starting 4 threads 00:13:05.689 00:13:05.689 job0: (groupid=0, jobs=1): err= 0: pid=785704: Fri Nov 29 12:57:08 2024 00:13:05.689 read: IOPS=6034, BW=23.6MiB/s (24.7MB/s)(23.9MiB/1012msec) 00:13:05.689 slat (nsec): min=893, max=25333k, avg=79790.84, stdev=748834.17 00:13:05.690 clat (usec): min=1446, max=53853, avg=10792.57, stdev=6862.14 00:13:05.690 lat (usec): min=1471, max=53881, avg=10872.36, stdev=6909.06 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 2737], 5.00th=[ 4555], 10.00th=[ 5538], 20.00th=[ 6980], 00:13:05.690 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 9110], 60.00th=[ 9896], 
00:13:05.690 | 70.00th=[10945], 80.00th=[12649], 90.00th=[15533], 95.00th=[27395], 00:13:05.690 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46924], 00:13:05.690 | 99.99th=[53740] 00:13:05.690 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec); 0 zone resets 00:13:05.690 slat (nsec): min=1571, max=19814k, avg=69105.95, stdev=651908.26 00:13:05.690 clat (usec): min=293, max=56083, avg=10167.63, stdev=8600.06 00:13:05.690 lat (usec): min=326, max=56106, avg=10236.74, stdev=8667.71 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 1319], 5.00th=[ 3097], 10.00th=[ 4228], 20.00th=[ 5276], 00:13:05.690 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6980], 60.00th=[ 8094], 00:13:05.690 | 70.00th=[ 9372], 80.00th=[12256], 90.00th=[20317], 95.00th=[33817], 00:13:05.690 | 99.00th=[40633], 99.50th=[41681], 99.90th=[46924], 99.95th=[53216], 00:13:05.690 | 99.99th=[55837] 00:13:05.690 bw ( KiB/s): min=18512, max=30640, per=29.65%, avg=24576.00, stdev=8575.79, samples=2 00:13:05.690 iops : min= 4628, max= 7660, avg=6144.00, stdev=2143.95, samples=2 00:13:05.690 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.14% 00:13:05.690 lat (msec) : 2=1.13%, 4=5.31%, 10=61.49%, 20=22.43%, 50=9.41% 00:13:05.690 lat (msec) : 100=0.04% 00:13:05.690 cpu : usr=5.24%, sys=6.03%, ctx=347, majf=0, minf=1 00:13:05.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:05.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.690 issued rwts: total=6107,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.690 job1: (groupid=0, jobs=1): err= 0: pid=785706: Fri Nov 29 12:57:08 2024 00:13:05.690 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:13:05.690 slat (nsec): min=954, max=16029k, avg=52981.12, stdev=476947.28 00:13:05.690 clat (usec): min=1098, 
max=65375, avg=8763.15, stdev=6719.35 00:13:05.690 lat (usec): min=1103, max=65376, avg=8816.13, stdev=6735.12 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 1549], 5.00th=[ 2311], 10.00th=[ 4228], 20.00th=[ 5473], 00:13:05.690 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7963], 00:13:05.690 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[13960], 95.00th=[20055], 00:13:05.690 | 99.00th=[30540], 99.50th=[60031], 99.90th=[65274], 99.95th=[65274], 00:13:05.690 | 99.99th=[65274] 00:13:05.690 write: IOPS=7422, BW=29.0MiB/s (30.4MB/s)(29.1MiB/1005msec); 0 zone resets 00:13:05.690 slat (nsec): min=1598, max=25428k, avg=55157.62, stdev=446895.12 00:13:05.690 clat (usec): min=437, max=76836, avg=8661.26, stdev=7612.35 00:13:05.690 lat (usec): min=473, max=76838, avg=8716.42, stdev=7639.24 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 1500], 5.00th=[ 2409], 10.00th=[ 3228], 20.00th=[ 4555], 00:13:05.690 | 30.00th=[ 5211], 40.00th=[ 5735], 50.00th=[ 6194], 60.00th=[ 6849], 00:13:05.690 | 70.00th=[ 8848], 80.00th=[11469], 90.00th=[17171], 95.00th=[24249], 00:13:05.690 | 99.00th=[33817], 99.50th=[58459], 99.90th=[74974], 99.95th=[77071], 00:13:05.690 | 99.99th=[77071] 00:13:05.690 bw ( KiB/s): min=26424, max=32240, per=35.39%, avg=29332.00, stdev=4112.53, samples=2 00:13:05.690 iops : min= 6606, max= 8060, avg=7333.00, stdev=1028.13, samples=2 00:13:05.690 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.11% 00:13:05.690 lat (msec) : 2=2.79%, 4=9.77%, 10=65.23%, 20=14.33%, 50=7.05% 00:13:05.690 lat (msec) : 100=0.67% 00:13:05.690 cpu : usr=5.38%, sys=8.47%, ctx=554, majf=0, minf=1 00:13:05.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:05.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.690 issued rwts: total=7168,7460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.690 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:05.690 job2: (groupid=0, jobs=1): err= 0: pid=785707: Fri Nov 29 12:57:08 2024 00:13:05.690 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(15.5MiB/1052msec) 00:13:05.690 slat (nsec): min=931, max=21950k, avg=125207.81, stdev=1049095.07 00:13:05.690 clat (usec): min=2049, max=65768, avg=17528.06, stdev=12121.37 00:13:05.690 lat (usec): min=2064, max=65775, avg=17653.27, stdev=12191.94 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 2769], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 9503], 00:13:05.690 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[12125], 60.00th=[15401], 00:13:05.690 | 70.00th=[21103], 80.00th=[25560], 90.00th=[34341], 95.00th=[44303], 00:13:05.690 | 99.00th=[55837], 99.50th=[55837], 99.90th=[65799], 99.95th=[65799], 00:13:05.690 | 99.99th=[65799] 00:13:05.690 write: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1052msec); 0 zone resets 00:13:05.690 slat (nsec): min=1564, max=13363k, avg=115769.43, stdev=783576.41 00:13:05.690 clat (usec): min=2342, max=38757, avg=15513.95, stdev=7531.68 00:13:05.690 lat (usec): min=2352, max=39749, avg=15629.72, stdev=7591.64 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 2868], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 8979], 00:13:05.690 | 30.00th=[10290], 40.00th=[12780], 50.00th=[13173], 60.00th=[14615], 00:13:05.690 | 70.00th=[18220], 80.00th=[22414], 90.00th=[26346], 95.00th=[30802], 00:13:05.690 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:13:05.690 | 99.99th=[38536] 00:13:05.690 bw ( KiB/s): min=16384, max=16384, per=19.77%, avg=16384.00, stdev= 0.00, samples=2 00:13:05.690 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:05.690 lat (msec) : 4=3.12%, 10=27.27%, 20=40.64%, 50=27.38%, 100=1.59% 00:13:05.690 cpu : usr=2.76%, sys=3.81%, ctx=326, majf=0, minf=2 00:13:05.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:05.690 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.690 issued rwts: total=3956,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.690 job3: (groupid=0, jobs=1): err= 0: pid=785708: Fri Nov 29 12:57:08 2024 00:13:05.690 read: IOPS=3526, BW=13.8MiB/s (14.4MB/s)(14.5MiB/1052msec) 00:13:05.690 slat (nsec): min=971, max=15645k, avg=120930.06, stdev=871917.99 00:13:05.690 clat (usec): min=4940, max=90462, avg=17158.64, stdev=13032.71 00:13:05.690 lat (usec): min=4945, max=90472, avg=17279.57, stdev=13083.75 00:13:05.690 clat percentiles (usec): 00:13:05.690 | 1.00th=[ 5997], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9634], 00:13:05.690 | 30.00th=[10945], 40.00th=[11863], 50.00th=[14091], 60.00th=[15008], 00:13:05.690 | 70.00th=[17695], 80.00th=[20841], 90.00th=[26084], 95.00th=[36963], 00:13:05.690 | 99.00th=[84411], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:13:05.690 | 99.99th=[90702] 00:13:05.690 write: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1052msec); 0 zone resets 00:13:05.691 slat (nsec): min=1761, max=13525k, avg=130738.94, stdev=826271.36 00:13:05.691 clat (usec): min=3316, max=90477, avg=16920.95, stdev=9981.37 00:13:05.691 lat (usec): min=3324, max=90486, avg=17051.69, stdev=10042.53 00:13:05.691 clat percentiles (usec): 00:13:05.691 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 7111], 20.00th=[ 8356], 00:13:05.691 | 30.00th=[ 9896], 40.00th=[12256], 50.00th=[13304], 60.00th=[16188], 00:13:05.691 | 70.00th=[22152], 80.00th=[23987], 90.00th=[30278], 95.00th=[39060], 00:13:05.691 | 99.00th=[46924], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:13:05.691 | 99.99th=[90702] 00:13:05.691 bw ( KiB/s): min=14392, max=18360, per=19.76%, avg=16376.00, stdev=2805.80, samples=2 00:13:05.691 iops : min= 3598, max= 4590, avg=4094.00, stdev=701.45, samples=2 00:13:05.691 lat (msec) : 4=0.20%, 
10=27.12%, 20=41.33%, 50=29.43%, 100=1.92% 00:13:05.691 cpu : usr=2.57%, sys=4.66%, ctx=300, majf=0, minf=1 00:13:05.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:05.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.691 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.691 00:13:05.691 Run status group 0 (all jobs): 00:13:05.691 READ: bw=77.8MiB/s (81.5MB/s), 13.8MiB/s-27.9MiB/s (14.4MB/s-29.2MB/s), io=81.8MiB (85.8MB), run=1005-1052msec 00:13:05.691 WRITE: bw=80.9MiB/s (84.9MB/s), 15.2MiB/s-29.0MiB/s (15.9MB/s-30.4MB/s), io=85.1MiB (89.3MB), run=1005-1052msec 00:13:05.691 00:13:05.691 Disk stats (read/write): 00:13:05.691 nvme0n1: ios=5170/5592, merge=0/0, ticks=42223/47034, in_queue=89257, util=87.88% 00:13:05.691 nvme0n2: ios=6220/6656, merge=0/0, ticks=47875/43506, in_queue=91381, util=97.76% 00:13:05.691 nvme0n3: ios=3324/3584, merge=0/0, ticks=27515/28767, in_queue=56282, util=88.40% 00:13:05.691 nvme0n4: ios=3093/3584, merge=0/0, ticks=28975/31954, in_queue=60929, util=96.90% 00:13:05.691 12:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:05.691 [global] 00:13:05.691 thread=1 00:13:05.691 invalidate=1 00:13:05.691 rw=randwrite 00:13:05.691 time_based=1 00:13:05.691 runtime=1 00:13:05.691 ioengine=libaio 00:13:05.691 direct=1 00:13:05.691 bs=4096 00:13:05.691 iodepth=128 00:13:05.691 norandommap=0 00:13:05.691 numjobs=1 00:13:05.691 00:13:05.691 verify_dump=1 00:13:05.691 verify_backlog=512 00:13:05.691 verify_state_save=0 00:13:05.691 do_verify=1 00:13:05.691 verify=crc32c-intel 00:13:05.691 [job0] 00:13:05.691 filename=/dev/nvme0n1 00:13:05.691 [job1] 00:13:05.691 
filename=/dev/nvme0n2 00:13:05.691 [job2] 00:13:05.691 filename=/dev/nvme0n3 00:13:05.691 [job3] 00:13:05.691 filename=/dev/nvme0n4 00:13:05.691 Could not set queue depth (nvme0n1) 00:13:05.691 Could not set queue depth (nvme0n2) 00:13:05.691 Could not set queue depth (nvme0n3) 00:13:05.691 Could not set queue depth (nvme0n4) 00:13:05.954 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.954 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.954 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.954 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:05.954 fio-3.35 00:13:05.954 Starting 4 threads 00:13:07.361 00:13:07.361 job0: (groupid=0, jobs=1): err= 0: pid=786224: Fri Nov 29 12:57:09 2024 00:13:07.361 read: IOPS=4363, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1006msec) 00:13:07.361 slat (nsec): min=979, max=10966k, avg=110257.13, stdev=782390.34 00:13:07.361 clat (usec): min=3195, max=43153, avg=14905.12, stdev=8432.13 00:13:07.361 lat (usec): min=4158, max=46623, avg=15015.38, stdev=8516.09 00:13:07.361 clat percentiles (usec): 00:13:07.361 | 1.00th=[ 4686], 5.00th=[ 7111], 10.00th=[ 7701], 20.00th=[ 8455], 00:13:07.361 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[11207], 60.00th=[12649], 00:13:07.361 | 70.00th=[17695], 80.00th=[22414], 90.00th=[27919], 95.00th=[32113], 00:13:07.361 | 99.00th=[39060], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:13:07.361 | 99.99th=[43254] 00:13:07.361 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:13:07.361 slat (nsec): min=1587, max=12249k, avg=91708.90, stdev=629266.64 00:13:07.361 clat (usec): min=1231, max=70215, avg=13415.87, stdev=9814.17 00:13:07.361 lat (usec): min=1242, max=70217, avg=13507.58, stdev=9882.19 00:13:07.361 
clat percentiles (usec): 00:13:07.361 | 1.00th=[ 2737], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 6849], 00:13:07.361 | 30.00th=[ 7439], 40.00th=[ 8586], 50.00th=[10028], 60.00th=[11994], 00:13:07.361 | 70.00th=[13960], 80.00th=[17695], 90.00th=[25297], 95.00th=[36963], 00:13:07.361 | 99.00th=[49021], 99.50th=[52167], 99.90th=[69731], 99.95th=[69731], 00:13:07.361 | 99.99th=[69731] 00:13:07.361 bw ( KiB/s): min=17656, max=19208, per=22.33%, avg=18432.00, stdev=1097.43, samples=2 00:13:07.361 iops : min= 4414, max= 4802, avg=4608.00, stdev=274.36, samples=2 00:13:07.361 lat (msec) : 2=0.14%, 4=1.60%, 10=43.69%, 20=34.72%, 50=19.39% 00:13:07.361 lat (msec) : 100=0.46% 00:13:07.361 cpu : usr=3.58%, sys=5.07%, ctx=288, majf=0, minf=1 00:13:07.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:07.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.361 issued rwts: total=4390,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.361 job1: (groupid=0, jobs=1): err= 0: pid=786225: Fri Nov 29 12:57:09 2024 00:13:07.361 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:13:07.361 slat (nsec): min=991, max=12574k, avg=74220.66, stdev=555665.06 00:13:07.361 clat (usec): min=3047, max=36613, avg=9549.30, stdev=3766.60 00:13:07.361 lat (usec): min=3052, max=36650, avg=9623.52, stdev=3810.84 00:13:07.361 clat percentiles (usec): 00:13:07.361 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 7046], 00:13:07.361 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8979], 00:13:07.361 | 70.00th=[ 9765], 80.00th=[11338], 90.00th=[14353], 95.00th=[16909], 00:13:07.361 | 99.00th=[23987], 99.50th=[24249], 99.90th=[33424], 99.95th=[33424], 00:13:07.361 | 99.99th=[36439] 00:13:07.361 write: IOPS=5825, BW=22.8MiB/s 
(23.9MB/s)(23.0MiB/1009msec); 0 zone resets 00:13:07.361 slat (nsec): min=1685, max=11862k, avg=92836.85, stdev=580313.73 00:13:07.361 clat (usec): min=1153, max=80834, avg=12544.69, stdev=12565.40 00:13:07.361 lat (usec): min=1166, max=84088, avg=12637.53, stdev=12651.56 00:13:07.361 clat percentiles (usec): 00:13:07.361 | 1.00th=[ 3589], 5.00th=[ 3884], 10.00th=[ 4080], 20.00th=[ 5407], 00:13:07.361 | 30.00th=[ 6063], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 8848], 00:13:07.361 | 70.00th=[12518], 80.00th=[17957], 90.00th=[23200], 95.00th=[38536], 00:13:07.361 | 99.00th=[69731], 99.50th=[76022], 99.90th=[80217], 99.95th=[80217], 00:13:07.361 | 99.99th=[81265] 00:13:07.361 bw ( KiB/s): min=22376, max=23632, per=27.87%, avg=23004.00, stdev=888.13, samples=2 00:13:07.361 iops : min= 5594, max= 5908, avg=5751.00, stdev=222.03, samples=2 00:13:07.361 lat (msec) : 2=0.02%, 4=4.22%, 10=64.91%, 20=21.53%, 50=7.55% 00:13:07.361 lat (msec) : 100=1.77% 00:13:07.361 cpu : usr=4.96%, sys=5.85%, ctx=371, majf=0, minf=1 00:13:07.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:07.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.361 issued rwts: total=5632,5878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.361 job2: (groupid=0, jobs=1): err= 0: pid=786226: Fri Nov 29 12:57:09 2024 00:13:07.361 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:13:07.361 slat (nsec): min=934, max=15903k, avg=92574.56, stdev=659894.75 00:13:07.361 clat (usec): min=4697, max=48172, avg=12359.93, stdev=7816.58 00:13:07.361 lat (usec): min=4702, max=48199, avg=12452.50, stdev=7887.34 00:13:07.361 clat percentiles (usec): 00:13:07.361 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:13:07.361 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 9241], 
60.00th=[10028], 00:13:07.361 | 70.00th=[11076], 80.00th=[17695], 90.00th=[25035], 95.00th=[29492], 00:13:07.361 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[46924], 00:13:07.361 | 99.99th=[47973] 00:13:07.361 write: IOPS=5183, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1006msec); 0 zone resets 00:13:07.361 slat (nsec): min=1537, max=12399k, avg=96810.52, stdev=568147.14 00:13:07.361 clat (usec): min=3257, max=61414, avg=12199.76, stdev=8974.37 00:13:07.361 lat (usec): min=3261, max=61422, avg=12296.57, stdev=9042.00 00:13:07.361 clat percentiles (usec): 00:13:07.362 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6980], 00:13:07.362 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 8455], 60.00th=[12387], 00:13:07.362 | 70.00th=[13304], 80.00th=[13566], 90.00th=[19268], 95.00th=[26084], 00:13:07.362 | 99.00th=[58459], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:13:07.362 | 99.99th=[61604] 00:13:07.362 bw ( KiB/s): min=19848, max=21112, per=24.81%, avg=20480.00, stdev=893.78, samples=2 00:13:07.362 iops : min= 4962, max= 5278, avg=5120.00, stdev=223.45, samples=2 00:13:07.362 lat (msec) : 4=0.14%, 10=57.74%, 20=29.28%, 50=11.81%, 100=1.04% 00:13:07.362 cpu : usr=3.48%, sys=4.38%, ctx=531, majf=0, minf=2 00:13:07.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:07.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.362 issued rwts: total=5120,5215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.362 job3: (groupid=0, jobs=1): err= 0: pid=786227: Fri Nov 29 12:57:09 2024 00:13:07.362 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1005msec) 00:13:07.362 slat (nsec): min=930, max=12168k, avg=83533.79, stdev=605813.98 00:13:07.362 clat (usec): min=2834, max=26828, avg=10389.08, stdev=3951.27 00:13:07.362 lat (usec): min=3535, 
max=26831, avg=10472.61, stdev=3988.94 00:13:07.362 clat percentiles (usec): 00:13:07.362 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7570], 00:13:07.362 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9765], 00:13:07.362 | 70.00th=[10552], 80.00th=[12518], 90.00th=[15533], 95.00th=[20841], 00:13:07.362 | 99.00th=[22938], 99.50th=[25822], 99.90th=[26346], 99.95th=[26870], 00:13:07.362 | 99.99th=[26870] 00:13:07.362 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:13:07.362 slat (nsec): min=1617, max=31708k, avg=99163.59, stdev=655898.70 00:13:07.362 clat (usec): min=1155, max=81324, avg=14547.62, stdev=12252.85 00:13:07.362 lat (usec): min=1167, max=81332, avg=14646.79, stdev=12319.60 00:13:07.362 clat percentiles (usec): 00:13:07.362 | 1.00th=[ 2802], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 6325], 00:13:07.362 | 30.00th=[ 7177], 40.00th=[10159], 50.00th=[12518], 60.00th=[13304], 00:13:07.362 | 70.00th=[14091], 80.00th=[19006], 90.00th=[25822], 95.00th=[36963], 00:13:07.362 | 99.00th=[78119], 99.50th=[80217], 99.90th=[80217], 99.95th=[81265], 00:13:07.362 | 99.99th=[81265] 00:13:07.362 bw ( KiB/s): min=18504, max=22456, per=24.81%, avg=20480.00, stdev=2794.49, samples=2 00:13:07.362 iops : min= 4626, max= 5614, avg=5120.00, stdev=698.62, samples=2 00:13:07.362 lat (msec) : 2=0.17%, 4=1.66%, 10=49.72%, 20=37.06%, 50=10.02% 00:13:07.362 lat (msec) : 100=1.37% 00:13:07.362 cpu : usr=4.38%, sys=4.88%, ctx=504, majf=0, minf=2 00:13:07.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:07.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.362 issued rwts: total=5100,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.362 00:13:07.362 Run status group 0 (all jobs): 00:13:07.362 READ: 
bw=78.4MiB/s (82.2MB/s), 17.0MiB/s-21.8MiB/s (17.9MB/s-22.9MB/s), io=79.1MiB (82.9MB), run=1005-1009msec 00:13:07.362 WRITE: bw=80.6MiB/s (84.5MB/s), 17.9MiB/s-22.8MiB/s (18.8MB/s-23.9MB/s), io=81.3MiB (85.3MB), run=1005-1009msec 00:13:07.362 00:13:07.362 Disk stats (read/write): 00:13:07.362 nvme0n1: ios=3791/4096, merge=0/0, ticks=35540/41965, in_queue=77505, util=95.79% 00:13:07.362 nvme0n2: ios=4301/4608, merge=0/0, ticks=42487/60245, in_queue=102732, util=98.27% 00:13:07.362 nvme0n3: ios=4659/4663, merge=0/0, ticks=26852/23405, in_queue=50257, util=91.89% 00:13:07.362 nvme0n4: ios=3982/4096, merge=0/0, ticks=39673/57108, in_queue=96781, util=92.03% 00:13:07.362 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:07.362 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=786741 00:13:07.362 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:07.362 12:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:07.362 [global] 00:13:07.362 thread=1 00:13:07.362 invalidate=1 00:13:07.362 rw=read 00:13:07.362 time_based=1 00:13:07.362 runtime=10 00:13:07.362 ioengine=libaio 00:13:07.362 direct=1 00:13:07.362 bs=4096 00:13:07.362 iodepth=1 00:13:07.362 norandommap=1 00:13:07.362 numjobs=1 00:13:07.362 00:13:07.362 [job0] 00:13:07.362 filename=/dev/nvme0n1 00:13:07.362 [job1] 00:13:07.362 filename=/dev/nvme0n2 00:13:07.362 [job2] 00:13:07.362 filename=/dev/nvme0n3 00:13:07.362 [job3] 00:13:07.362 filename=/dev/nvme0n4 00:13:07.362 Could not set queue depth (nvme0n1) 00:13:07.362 Could not set queue depth (nvme0n2) 00:13:07.362 Could not set queue depth (nvme0n3) 00:13:07.362 Could not set queue depth (nvme0n4) 00:13:07.626 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.626 
job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.626 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.626 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.626 fio-3.35 00:13:07.626 Starting 4 threads 00:13:10.176 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:10.437 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10006528, buflen=4096 00:13:10.437 fio: pid=787021, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:10.437 12:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:10.437 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11051008, buflen=4096 00:13:10.437 fio: pid=787015, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:10.437 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.437 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:10.698 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11124736, buflen=4096 00:13:10.698 fio: pid=786991, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:10.698 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.698 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:10.959 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11780096, buflen=4096 00:13:10.959 fio: pid=786994, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:10.959 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.959 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:10.959 00:13:10.959 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=786991: Fri Nov 29 12:57:13 2024 00:13:10.959 read: IOPS=919, BW=3676KiB/s (3765kB/s)(10.6MiB/2955msec) 00:13:10.959 slat (usec): min=7, max=30904, avg=52.89, stdev=780.95 00:13:10.959 clat (usec): min=578, max=1361, avg=1020.58, stdev=91.21 00:13:10.959 lat (usec): min=603, max=31912, avg=1073.49, stdev=787.28 00:13:10.959 clat percentiles (usec): 00:13:10.959 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 963], 00:13:10.959 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:13:10.959 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:13:10.959 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1336], 99.95th=[ 1352], 00:13:10.959 | 99.99th=[ 1369] 00:13:10.959 bw ( KiB/s): min= 3736, max= 3832, per=27.64%, avg=3785.60, stdev=38.12, samples=5 00:13:10.959 iops : min= 934, max= 958, avg=946.40, stdev= 9.53, samples=5 00:13:10.959 lat (usec) : 750=0.74%, 1000=30.36% 00:13:10.959 lat (msec) : 2=68.86% 00:13:10.959 cpu : usr=1.18%, sys=2.74%, ctx=2722, majf=0, minf=1 00:13:10.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.959 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.959 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:10.959 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=786994: Fri Nov 29 12:57:13 2024 00:13:10.959 read: IOPS=917, BW=3670KiB/s (3758kB/s)(11.2MiB/3135msec) 00:13:10.959 slat (usec): min=6, max=18626, avg=53.17, stdev=592.14 00:13:10.959 clat (usec): min=561, max=6107, avg=1024.71, stdev=135.46 00:13:10.959 lat (usec): min=589, max=19859, avg=1077.89, stdev=610.08 00:13:10.959 clat percentiles (usec): 00:13:10.959 | 1.00th=[ 766], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 963], 00:13:10.959 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:13:10.959 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:13:10.959 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1500], 99.95th=[ 2835], 00:13:10.959 | 99.99th=[ 6128] 00:13:10.959 bw ( KiB/s): min= 3318, max= 3832, per=27.01%, avg=3699.67, stdev=191.98, samples=6 00:13:10.959 iops : min= 829, max= 958, avg=924.83, stdev=48.19, samples=6 00:13:10.959 lat (usec) : 750=0.94%, 1000=34.52% 00:13:10.960 lat (msec) : 2=64.44%, 4=0.03%, 10=0.03% 00:13:10.960 cpu : usr=1.50%, sys=3.92%, ctx=2884, majf=0, minf=2 00:13:10.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:10.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 issued rwts: total=2877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:10.960 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=787015: Fri Nov 29 12:57:13 2024 00:13:10.960 read: IOPS=972, BW=3890KiB/s 
(3984kB/s)(10.5MiB/2774msec) 00:13:10.960 slat (usec): min=6, max=19639, avg=37.70, stdev=415.26 00:13:10.960 clat (usec): min=239, max=3426, avg=974.38, stdev=121.89 00:13:10.960 lat (usec): min=246, max=20552, avg=1012.08, stdev=432.26 00:13:10.960 clat percentiles (usec): 00:13:10.960 | 1.00th=[ 603], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 898], 00:13:10.960 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:13:10.960 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:13:10.960 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1319], 99.95th=[ 1319], 00:13:10.960 | 99.99th=[ 3425] 00:13:10.960 bw ( KiB/s): min= 3864, max= 4064, per=28.88%, avg=3955.20, stdev=73.45, samples=5 00:13:10.960 iops : min= 966, max= 1016, avg=988.80, stdev=18.36, samples=5 00:13:10.960 lat (usec) : 250=0.04%, 500=0.26%, 750=3.41%, 1000=50.83% 00:13:10.960 lat (msec) : 2=45.39%, 4=0.04% 00:13:10.960 cpu : usr=1.59%, sys=4.11%, ctx=2701, majf=0, minf=2 00:13:10.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:10.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 issued rwts: total=2699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:10.960 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=787021: Fri Nov 29 12:57:13 2024 00:13:10.960 read: IOPS=943, BW=3774KiB/s (3865kB/s)(9772KiB/2589msec) 00:13:10.960 slat (nsec): min=6930, max=60958, avg=26177.78, stdev=3245.82 00:13:10.960 clat (usec): min=329, max=3449, avg=1016.30, stdev=117.27 00:13:10.960 lat (usec): min=355, max=3474, avg=1042.48, stdev=117.64 00:13:10.960 clat percentiles (usec): 00:13:10.960 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 938], 00:13:10.960 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 
1037], 60.00th=[ 1057], 00:13:10.960 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:13:10.960 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1303], 00:13:10.960 | 99.99th=[ 3458] 00:13:10.960 bw ( KiB/s): min= 3680, max= 3976, per=27.89%, avg=3819.20, stdev=117.93, samples=5 00:13:10.960 iops : min= 920, max= 994, avg=954.80, stdev=29.48, samples=5 00:13:10.960 lat (usec) : 500=0.04%, 750=2.33%, 1000=32.53% 00:13:10.960 lat (msec) : 2=65.02%, 4=0.04% 00:13:10.960 cpu : usr=1.20%, sys=2.70%, ctx=2444, majf=0, minf=2 00:13:10.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:10.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.960 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:10.960 00:13:10.960 Run status group 0 (all jobs): 00:13:10.960 READ: bw=13.4MiB/s (14.0MB/s), 3670KiB/s-3890KiB/s (3758kB/s-3984kB/s), io=41.9MiB (44.0MB), run=2589-3135msec 00:13:10.960 00:13:10.960 Disk stats (read/write): 00:13:10.960 nvme0n1: ios=2619/0, merge=0/0, ticks=2551/0, in_queue=2551, util=92.29% 00:13:10.960 nvme0n2: ios=2842/0, merge=0/0, ticks=2632/0, in_queue=2632, util=93.43% 00:13:10.960 nvme0n3: ios=2553/0, merge=0/0, ticks=2279/0, in_queue=2279, util=95.99% 00:13:10.960 nvme0n4: ios=2444/0, merge=0/0, ticks=2430/0, in_queue=2430, util=95.90% 00:13:10.960 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:10.960 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:11.221 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.221 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:11.481 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.481 12:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 786741 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:11.741 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.002 12:57:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:12.002 nvmf hotplug test: fio failed as expected 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:12.002 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.002 12:57:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.002 rmmod nvme_tcp 00:13:12.264 rmmod nvme_fabrics 00:13:12.264 rmmod nvme_keyring 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 782623 ']' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 782623 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 782623 ']' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 782623 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782623 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782623' 00:13:12.264 killing process with pid 782623 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 782623 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 782623 
00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.264 12:57:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.807 12:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.807 00:13:14.807 real 0m29.464s 00:13:14.807 user 2m41.634s 00:13:14.807 sys 0m9.800s 00:13:14.807 12:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.807 12:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.807 ************************************ 00:13:14.807 END TEST nvmf_fio_target 00:13:14.807 ************************************ 00:13:14.807 12:57:17 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:14.807 ************************************ 00:13:14.807 START TEST nvmf_bdevio 00:13:14.807 ************************************ 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:14.807 * Looking for test storage... 00:13:14.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@337 -- # IFS=.-: 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # 
ver2[v]=2 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.807 --rc genhtml_branch_coverage=1 00:13:14.807 --rc genhtml_function_coverage=1 00:13:14.807 --rc genhtml_legend=1 00:13:14.807 --rc geninfo_all_blocks=1 00:13:14.807 --rc geninfo_unexecuted_blocks=1 00:13:14.807 00:13:14.807 ' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.807 --rc genhtml_branch_coverage=1 00:13:14.807 --rc genhtml_function_coverage=1 00:13:14.807 --rc genhtml_legend=1 00:13:14.807 --rc geninfo_all_blocks=1 00:13:14.807 --rc geninfo_unexecuted_blocks=1 00:13:14.807 00:13:14.807 ' 00:13:14.807 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.807 --rc genhtml_branch_coverage=1 00:13:14.807 --rc genhtml_function_coverage=1 00:13:14.807 --rc genhtml_legend=1 00:13:14.807 --rc geninfo_all_blocks=1 00:13:14.807 --rc geninfo_unexecuted_blocks=1 00:13:14.807 00:13:14.807 ' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:14.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.808 --rc 
genhtml_branch_coverage=1 00:13:14.808 --rc genhtml_function_coverage=1 00:13:14.808 --rc genhtml_legend=1 00:13:14.808 --rc geninfo_all_blocks=1 00:13:14.808 --rc geninfo_unexecuted_blocks=1 00:13:14.808 00:13:14.808 ' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.808 12:57:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.952 12:57:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.952 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:13:22.953 00:13:22.953 --- 10.0.0.2 ping statistics --- 00:13:22.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.953 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:13:22.953 00:13:22.953 --- 10.0.0.1 ping statistics --- 00:13:22.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.953 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=792284 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 792284 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 792284 ']' 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.953 12:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 [2024-11-29 12:57:24.890486] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:13:22.953 [2024-11-29 12:57:24.890553] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.953 [2024-11-29 12:57:24.992082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.953 [2024-11-29 12:57:25.044783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.953 [2024-11-29 12:57:25.044836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:22.953 [2024-11-29 12:57:25.044845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.953 [2024-11-29 12:57:25.044852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.953 [2024-11-29 12:57:25.044858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.953 [2024-11-29 12:57:25.046901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:22.953 [2024-11-29 12:57:25.047061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:22.953 [2024-11-29 12:57:25.047225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.953 [2024-11-29 12:57:25.047225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 [2024-11-29 12:57:25.772733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 Malloc0 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:23.216 [2024-11-29 
12:57:25.845806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:23.216 { 00:13:23.216 "params": { 00:13:23.216 "name": "Nvme$subsystem", 00:13:23.216 "trtype": "$TEST_TRANSPORT", 00:13:23.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.216 "adrfam": "ipv4", 00:13:23.216 "trsvcid": "$NVMF_PORT", 00:13:23.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.216 "hdgst": ${hdgst:-false}, 00:13:23.216 "ddgst": ${ddgst:-false} 00:13:23.216 }, 00:13:23.216 "method": "bdev_nvme_attach_controller" 00:13:23.216 } 00:13:23.216 EOF 00:13:23.216 )") 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:23.216 12:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:23.216 "params": { 00:13:23.216 "name": "Nvme1", 00:13:23.216 "trtype": "tcp", 00:13:23.216 "traddr": "10.0.0.2", 00:13:23.216 "adrfam": "ipv4", 00:13:23.216 "trsvcid": "4420", 00:13:23.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.216 "hdgst": false, 00:13:23.216 "ddgst": false 00:13:23.216 }, 00:13:23.216 "method": "bdev_nvme_attach_controller" 00:13:23.216 }' 00:13:23.478 [2024-11-29 12:57:25.906497] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:13:23.478 [2024-11-29 12:57:25.906566] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792619 ] 00:13:23.478 [2024-11-29 12:57:25.999374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.478 [2024-11-29 12:57:26.055932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.478 [2024-11-29 12:57:26.056099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.478 [2024-11-29 12:57:26.056099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.739 I/O targets: 00:13:23.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:23.739 00:13:23.739 00:13:23.739 CUnit - A unit testing framework for C - Version 2.1-3 00:13:23.739 http://cunit.sourceforge.net/ 00:13:23.739 00:13:23.739 00:13:23.739 Suite: bdevio tests on: Nvme1n1 00:13:23.739 Test: blockdev write read block ...passed 00:13:24.001 Test: blockdev write zeroes read block ...passed 00:13:24.001 Test: blockdev write zeroes read no split ...passed 00:13:24.001 Test: blockdev write zeroes read split 
...passed 00:13:24.001 Test: blockdev write zeroes read split partial ...passed 00:13:24.001 Test: blockdev reset ...[2024-11-29 12:57:26.554850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:24.001 [2024-11-29 12:57:26.554957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd6970 (9): Bad file descriptor 00:13:24.001 [2024-11-29 12:57:26.566108] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:13:24.001 passed 00:13:24.001 Test: blockdev write read 8 blocks ...passed 00:13:24.001 Test: blockdev write read size > 128k ...passed 00:13:24.001 Test: blockdev write read invalid size ...passed 00:13:24.001 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:24.001 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:24.001 Test: blockdev write read max offset ...passed 00:13:24.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:24.262 Test: blockdev writev readv 8 blocks ...passed 00:13:24.262 Test: blockdev writev readv 30 x 1block ...passed 00:13:24.262 Test: blockdev writev readv block ...passed 00:13:24.262 Test: blockdev writev readv size > 128k ...passed 00:13:24.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:24.262 Test: blockdev comparev and writev ...[2024-11-29 12:57:26.786494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.262 [2024-11-29 12:57:26.786541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:24.262 [2024-11-29 12:57:26.786558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.262 [2024-11-29 
12:57:26.786567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:24.262 [2024-11-29 12:57:26.786956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.786970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.786984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.787392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.787405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.787419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.787428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.787811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.787823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:13:24.263 [2024-11-29 12:57:26.787845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:24.263 passed 00:13:24.263 Test: blockdev nvme passthru rw ...passed 00:13:24.263 Test: blockdev nvme passthru vendor specific ...[2024-11-29 12:57:26.873646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:24.263 [2024-11-29 12:57:26.873665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.873914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:24.263 [2024-11-29 12:57:26.873927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.874193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:24.263 [2024-11-29 12:57:26.874206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:24.263 [2024-11-29 12:57:26.874461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:24.263 [2024-11-29 12:57:26.874473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:24.263 passed 00:13:24.263 Test: blockdev nvme admin passthru ...passed 00:13:24.263 Test: blockdev copy ...passed 00:13:24.263 00:13:24.263 Run Summary: Type Total Ran Passed Failed Inactive 00:13:24.263 suites 1 1 n/a 0 0 00:13:24.263 tests 23 23 23 0 0 00:13:24.263 asserts 152 152 152 0 n/a 00:13:24.263 00:13:24.263 Elapsed time = 1.168 seconds 
00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.524 rmmod nvme_tcp 00:13:24.524 rmmod nvme_fabrics 00:13:24.524 rmmod nvme_keyring 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 792284 ']' 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 792284 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 792284 ']' 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 792284 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.524 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 792284 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 792284' 00:13:24.785 killing process with pid 792284 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 792284 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 792284 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.785 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.332 00:13:27.332 real 0m12.334s 00:13:27.332 user 0m13.755s 00:13:27.332 sys 0m6.282s 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:27.332 ************************************ 00:13:27.332 END TEST nvmf_bdevio 00:13:27.332 ************************************ 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:27.332 00:13:27.332 real 5m4.928s 00:13:27.332 user 11m49.175s 00:13:27.332 sys 1m52.310s 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.332 ************************************ 00:13:27.332 END TEST nvmf_target_core 00:13:27.332 ************************************ 00:13:27.332 12:57:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:27.332 12:57:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.332 12:57:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.332 12:57:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:13:27.332 ************************************ 00:13:27.332 START TEST nvmf_target_extra 00:13:27.332 ************************************ 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:27.332 * Looking for test storage... 00:13:27.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.332 --rc genhtml_branch_coverage=1 00:13:27.332 --rc genhtml_function_coverage=1 00:13:27.332 --rc genhtml_legend=1 00:13:27.332 --rc geninfo_all_blocks=1 
00:13:27.332 --rc geninfo_unexecuted_blocks=1 00:13:27.332 00:13:27.332 ' 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.332 --rc genhtml_branch_coverage=1 00:13:27.332 --rc genhtml_function_coverage=1 00:13:27.332 --rc genhtml_legend=1 00:13:27.332 --rc geninfo_all_blocks=1 00:13:27.332 --rc geninfo_unexecuted_blocks=1 00:13:27.332 00:13:27.332 ' 00:13:27.332 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.333 --rc genhtml_branch_coverage=1 00:13:27.333 --rc genhtml_function_coverage=1 00:13:27.333 --rc genhtml_legend=1 00:13:27.333 --rc geninfo_all_blocks=1 00:13:27.333 --rc geninfo_unexecuted_blocks=1 00:13:27.333 00:13:27.333 ' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.333 --rc genhtml_branch_coverage=1 00:13:27.333 --rc genhtml_function_coverage=1 00:13:27.333 --rc genhtml_legend=1 00:13:27.333 --rc geninfo_all_blocks=1 00:13:27.333 --rc geninfo_unexecuted_blocks=1 00:13:27.333 00:13:27.333 ' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.333 ************************************ 00:13:27.333 START TEST nvmf_example 00:13:27.333 ************************************ 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:27.333 * Looking for test storage... 00:13:27.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.333 
12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.333 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.334 12:57:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:27.334 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.595 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.595 --rc genhtml_branch_coverage=1 00:13:27.595 --rc genhtml_function_coverage=1 00:13:27.595 --rc genhtml_legend=1 00:13:27.595 --rc geninfo_all_blocks=1 00:13:27.595 --rc geninfo_unexecuted_blocks=1 00:13:27.595 00:13:27.595 ' 00:13:27.595 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.595 --rc genhtml_branch_coverage=1 00:13:27.595 --rc genhtml_function_coverage=1 00:13:27.595 --rc genhtml_legend=1 00:13:27.595 --rc geninfo_all_blocks=1 00:13:27.595 --rc geninfo_unexecuted_blocks=1 00:13:27.596 00:13:27.596 ' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.596 --rc genhtml_branch_coverage=1 00:13:27.596 --rc genhtml_function_coverage=1 00:13:27.596 --rc genhtml_legend=1 00:13:27.596 --rc geninfo_all_blocks=1 00:13:27.596 --rc geninfo_unexecuted_blocks=1 00:13:27.596 00:13:27.596 ' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.596 --rc 
genhtml_branch_coverage=1 00:13:27.596 --rc genhtml_function_coverage=1 00:13:27.596 --rc genhtml_legend=1 00:13:27.596 --rc geninfo_all_blocks=1 00:13:27.596 --rc geninfo_unexecuted_blocks=1 00:13:27.596 00:13:27.596 ' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:27.596 12:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.596 
12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.596 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.741 12:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:35.741 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:35.741 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:35.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.741 12:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:35.741 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.741 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.742 
12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:13:35.742 00:13:35.742 --- 10.0.0.2 ping statistics --- 00:13:35.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.742 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:13:35.742 00:13:35.742 --- 10.0.0.1 ping statistics --- 00:13:35.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.742 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.742 12:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=797224 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 797224 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 797224 ']' 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:35.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.742 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:36.002 12:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:36.002 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:48.234 Initializing NVMe Controllers
00:13:48.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:48.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:48.234 Initialization complete. Launching workers.
00:13:48.234 ========================================================
00:13:48.234 Latency(us)
00:13:48.234 Device Information : IOPS MiB/s Average min max
00:13:48.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18617.24 72.72 3438.51 612.49 15410.42
00:13:48.234 ========================================================
00:13:48.234 Total : 18617.24 72.72 3438.51 612.49 15410.42
00:13:48.234
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:48.234 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:48.235 rmmod nvme_tcp
00:13:48.235 rmmod nvme_fabrics
00:13:48.235 rmmod nvme_keyring
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 797224 ']'
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 797224
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 797224 ']'
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 797224
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797224
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797224'
00:13:48.235 killing process with pid 797224
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 797224
00:13:48.235 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 797224
00:13:48.235 nvmf threads initialize successfully
00:13:48.235 bdev subsystem init successfully
00:13:48.235 created a nvmf target service
00:13:48.235 create targets's poll groups done
00:13:48.235 all subsystems of target started
00:13:48.235 nvmf target is running
00:13:48.235 all subsystems of target stopped
00:13:48.235 destroy targets's poll groups done
00:13:48.235 destroyed the nvmf target service
00:13:48.235 bdev subsystem finish successfully
00:13:48.235 nvmf threads destroy successfully
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:48.235 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:48.495 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:48.495 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:13:48.495 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:48.495 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:48.756
00:13:48.756 real 0m21.404s
00:13:48.756 user 0m46.477s
00:13:48.756 sys 0m6.925s
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:13:48.756 ************************************
00:13:48.756 END TEST nvmf_example
00:13:48.756 ************************************
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:48.756 ************************************
00:13:48.756 START TEST nvmf_filesystem
00:13:48.756 ************************************
00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:13:48.756 * Looking for test storage...
00:13:48.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.756 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:49.030 
12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.030 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:49.030 --rc genhtml_branch_coverage=1 00:13:49.030 --rc genhtml_function_coverage=1 00:13:49.030 --rc genhtml_legend=1 00:13:49.030 --rc geninfo_all_blocks=1 00:13:49.030 --rc geninfo_unexecuted_blocks=1 00:13:49.030 00:13:49.030 ' 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.030 --rc genhtml_branch_coverage=1 00:13:49.030 --rc genhtml_function_coverage=1 00:13:49.030 --rc genhtml_legend=1 00:13:49.030 --rc geninfo_all_blocks=1 00:13:49.030 --rc geninfo_unexecuted_blocks=1 00:13:49.030 00:13:49.030 ' 00:13:49.030 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.030 --rc genhtml_branch_coverage=1 00:13:49.030 --rc genhtml_function_coverage=1 00:13:49.030 --rc genhtml_legend=1 00:13:49.030 --rc geninfo_all_blocks=1 00:13:49.030 --rc geninfo_unexecuted_blocks=1 00:13:49.030 00:13:49.030 ' 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.031 --rc genhtml_branch_coverage=1 00:13:49.031 --rc genhtml_function_coverage=1 00:13:49.031 --rc genhtml_legend=1 00:13:49.031 --rc geninfo_all_blocks=1 00:13:49.031 --rc geninfo_unexecuted_blocks=1 00:13:49.031 00:13:49.031 ' 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:49.031 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:49.031 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:49.031 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:49.031 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:49.031 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:49.032 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:49.032 
12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:49.032 #define SPDK_CONFIG_H 00:13:49.032 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:49.032 #define SPDK_CONFIG_APPS 1 00:13:49.032 #define SPDK_CONFIG_ARCH native 00:13:49.032 #undef SPDK_CONFIG_ASAN 00:13:49.032 #undef SPDK_CONFIG_AVAHI 00:13:49.032 #undef SPDK_CONFIG_CET 00:13:49.032 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:49.032 #define SPDK_CONFIG_COVERAGE 1 00:13:49.032 #define SPDK_CONFIG_CROSS_PREFIX 00:13:49.032 #undef SPDK_CONFIG_CRYPTO 00:13:49.032 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:49.032 #undef SPDK_CONFIG_CUSTOMOCF 00:13:49.032 #undef SPDK_CONFIG_DAOS 00:13:49.032 #define SPDK_CONFIG_DAOS_DIR 00:13:49.032 #define SPDK_CONFIG_DEBUG 1 00:13:49.032 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:49.032 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:49.032 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:49.032 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:49.032 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:49.032 #undef SPDK_CONFIG_DPDK_UADK 00:13:49.032 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:49.032 #define SPDK_CONFIG_EXAMPLES 1 00:13:49.032 #undef SPDK_CONFIG_FC 00:13:49.032 #define SPDK_CONFIG_FC_PATH 00:13:49.032 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:49.032 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:49.032 #define SPDK_CONFIG_FSDEV 1 00:13:49.032 #undef SPDK_CONFIG_FUSE 00:13:49.032 #undef SPDK_CONFIG_FUZZER 00:13:49.032 #define SPDK_CONFIG_FUZZER_LIB 00:13:49.032 #undef SPDK_CONFIG_GOLANG 00:13:49.032 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:49.032 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:49.032 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:49.032 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:49.032 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:49.032 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:49.032 #undef SPDK_CONFIG_HAVE_LZ4 00:13:49.032 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:49.032 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:49.032 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:49.032 #define SPDK_CONFIG_IDXD 1 00:13:49.032 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:49.032 #undef SPDK_CONFIG_IPSEC_MB 00:13:49.032 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:49.032 #define SPDK_CONFIG_ISAL 1 00:13:49.032 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:49.032 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:49.032 #define SPDK_CONFIG_LIBDIR 00:13:49.032 #undef SPDK_CONFIG_LTO 00:13:49.032 #define SPDK_CONFIG_MAX_LCORES 128 00:13:49.032 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:49.032 #define SPDK_CONFIG_NVME_CUSE 1 00:13:49.032 #undef SPDK_CONFIG_OCF 00:13:49.032 #define SPDK_CONFIG_OCF_PATH 00:13:49.032 #define SPDK_CONFIG_OPENSSL_PATH 00:13:49.032 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:49.032 #define SPDK_CONFIG_PGO_DIR 00:13:49.032 #undef SPDK_CONFIG_PGO_USE 00:13:49.032 #define SPDK_CONFIG_PREFIX /usr/local 00:13:49.032 #undef SPDK_CONFIG_RAID5F 00:13:49.032 #undef SPDK_CONFIG_RBD 00:13:49.032 #define SPDK_CONFIG_RDMA 1 00:13:49.032 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:49.032 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:49.032 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:49.032 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:49.032 #define SPDK_CONFIG_SHARED 1 00:13:49.032 #undef SPDK_CONFIG_SMA 00:13:49.032 #define SPDK_CONFIG_TESTS 1 00:13:49.032 #undef SPDK_CONFIG_TSAN 00:13:49.032 #define SPDK_CONFIG_UBLK 1 00:13:49.032 #define SPDK_CONFIG_UBSAN 1 00:13:49.032 #undef SPDK_CONFIG_UNIT_TESTS 00:13:49.032 #undef SPDK_CONFIG_URING 00:13:49.032 #define SPDK_CONFIG_URING_PATH 00:13:49.032 #undef SPDK_CONFIG_URING_ZNS 00:13:49.032 #undef SPDK_CONFIG_USDT 00:13:49.032 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:49.032 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:49.032 #define SPDK_CONFIG_VFIO_USER 1 00:13:49.032 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:49.032 #define SPDK_CONFIG_VHOST 1 00:13:49.032 #define SPDK_CONFIG_VIRTIO 1 00:13:49.032 #undef SPDK_CONFIG_VTUNE 00:13:49.032 #define SPDK_CONFIG_VTUNE_DIR 00:13:49.032 #define SPDK_CONFIG_WERROR 1 00:13:49.032 #define SPDK_CONFIG_WPDK_DIR 00:13:49.032 #undef SPDK_CONFIG_XNVME 00:13:49.032 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:49.032 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:49.033 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:49.033 
12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:49.033 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:49.033 
12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:49.033 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:49.034 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:49.034 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 800021 ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 800021 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.yg8Hjq 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yg8Hjq/tests/target /tmp/spdk.yg8Hjq 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118248300544 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11108208640 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:13:49.035 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:49.035 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64676990976 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1265664 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:49.036 * Looking for test storage... 
00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118248300544 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13322801152 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.036 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:49.036 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.036 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.297 --rc genhtml_branch_coverage=1 00:13:49.297 --rc genhtml_function_coverage=1 00:13:49.297 --rc genhtml_legend=1 00:13:49.297 --rc geninfo_all_blocks=1 00:13:49.297 --rc geninfo_unexecuted_blocks=1 00:13:49.297 00:13:49.297 ' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.297 --rc genhtml_branch_coverage=1 00:13:49.297 --rc genhtml_function_coverage=1 00:13:49.297 --rc genhtml_legend=1 00:13:49.297 --rc geninfo_all_blocks=1 00:13:49.297 --rc geninfo_unexecuted_blocks=1 00:13:49.297 00:13:49.297 ' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.297 --rc genhtml_branch_coverage=1 00:13:49.297 --rc genhtml_function_coverage=1 00:13:49.297 --rc genhtml_legend=1 00:13:49.297 --rc geninfo_all_blocks=1 00:13:49.297 --rc geninfo_unexecuted_blocks=1 00:13:49.297 00:13:49.297 ' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.297 --rc genhtml_branch_coverage=1 00:13:49.297 --rc genhtml_function_coverage=1 00:13:49.297 --rc genhtml_legend=1 00:13:49.297 --rc geninfo_all_blocks=1 00:13:49.297 --rc geninfo_unexecuted_blocks=1 00:13:49.297 00:13:49.297 ' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.297 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.297 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.298 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.435 12:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:57.435 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:57.435 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.435 12:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:57.435 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:57.435 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:57.435 12:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.435 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.436 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:57.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:13:57.436 00:13:57.436 --- 10.0.0.2 ping statistics --- 00:13:57.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.436 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:13:57.436 00:13:57.436 --- 10.0.0.1 ping statistics --- 00:13:57.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.436 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:57.436 12:57:59 
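The `nvmf_tcp_init` sequence traced above (common.sh@250-291) puts one port of the NIC into a dedicated network namespace for the target while the peer port stays in the default namespace for the initiator, so both ends can exchange real TCP traffic on a single host. The following is a minimal dry-run sketch of that topology, reusing the interface names, namespace, and addresses from the log (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2); `plan` only prints each command, so nothing here touches the system. To actually apply it, replace `plan` with `eval` and run as root.

```shell
# Dry-run sketch of the nvmf TCP loopback topology from the log above.
# Interface/namespace names and IPs are taken from the trace; 'plan'
# prints commands instead of executing them (swap for eval + root to apply).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0; INI_IF=cvl_0_1
TGT_IP=10.0.0.2; INI_IP=10.0.0.1
plan() { printf '%s\n' "$*"; }

plan ip netns add "$NS"                                  # target namespace
plan ip link set "$TGT_IF" netns "$NS"                   # move target port in
plan ip addr add "$INI_IP/24" dev "$INI_IF"              # initiator side
plan ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
plan ip link set "$INI_IF" up
plan ip netns exec "$NS" ip link set "$TGT_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
plan ping -c 1 "$TGT_IP"                                 # sanity check, both ways in the log
```

The pings in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this topology before the target application is started with `ip netns exec` prepended to its command line.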
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:57.436 ************************************ 00:13:57.436 START TEST nvmf_filesystem_no_in_capsule 00:13:57.436 ************************************ 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=803776 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 803776 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 803776 ']' 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.436 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.436 [2024-11-29 12:57:59.384754] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:13:57.436 [2024-11-29 12:57:59.384819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.436 [2024-11-29 12:57:59.485343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.436 [2024-11-29 12:57:59.538726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.436 [2024-11-29 12:57:59.538781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:57.436 [2024-11-29 12:57:59.538790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.436 [2024-11-29 12:57:59.538797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.436 [2024-11-29 12:57:59.538803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.436 [2024-11-29 12:57:59.540882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.436 [2024-11-29 12:57:59.541042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.436 [2024-11-29 12:57:59.541242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.436 [2024-11-29 12:57:59.541267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.697 [2024-11-29 12:58:00.267530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.697 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.958 Malloc1 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.958 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.959 [2024-11-29 12:58:00.430527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:57.959 12:58:00 
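The `rpc_cmd` calls traced in filesystem.sh@52-56 configure the running target over its RPC socket: create a TCP transport with in-capsule data size 0 (this is the `no_in_capsule` variant), create a 512 MiB malloc bdev with 512-byte blocks, create a subsystem, attach the bdev as a namespace, and add a TCP listener. A condensed dry-run sketch of that sequence follows; the `rpc.py` client path is an assumption (in an SPDK checkout it lives at `scripts/rpc.py`), and `echo` is prefixed so the commands are printed rather than executed.

```shell
# Dry-run sketch of the target-side RPC sequence from filesystem.sh@52-56.
# Assumption: 'rpc.py' stands in for the real SPDK scripts/rpc.py client;
# the leading 'echo' makes this a no-op print of each call.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data
$RPC bdev_malloc_create 512 512 -b Malloc1               # 512 MiB, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The serial number `SPDKISFASTANDAWESOME` set here is what the initiator-side `waitforserial` helper later greps for in `lsblk` output to detect that the connected namespace has appeared as a block device.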
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:57.959 { 00:13:57.959 "name": "Malloc1", 00:13:57.959 "aliases": [ 00:13:57.959 "edb872a7-28d7-40d4-b0d7-c1bdbddd62eb" 00:13:57.959 ], 00:13:57.959 "product_name": "Malloc disk", 00:13:57.959 "block_size": 512, 00:13:57.959 "num_blocks": 1048576, 00:13:57.959 "uuid": "edb872a7-28d7-40d4-b0d7-c1bdbddd62eb", 00:13:57.959 "assigned_rate_limits": { 00:13:57.959 "rw_ios_per_sec": 0, 00:13:57.959 "rw_mbytes_per_sec": 0, 00:13:57.959 "r_mbytes_per_sec": 0, 00:13:57.959 "w_mbytes_per_sec": 0 00:13:57.959 }, 00:13:57.959 "claimed": true, 00:13:57.959 "claim_type": "exclusive_write", 00:13:57.959 "zoned": false, 00:13:57.959 "supported_io_types": { 00:13:57.959 "read": true, 00:13:57.959 "write": true, 00:13:57.959 "unmap": true, 00:13:57.959 "flush": true, 00:13:57.959 "reset": true, 00:13:57.959 "nvme_admin": false, 00:13:57.959 "nvme_io": false, 00:13:57.959 "nvme_io_md": false, 00:13:57.959 "write_zeroes": true, 00:13:57.959 "zcopy": true, 00:13:57.959 "get_zone_info": false, 00:13:57.959 "zone_management": false, 00:13:57.959 "zone_append": false, 00:13:57.959 "compare": false, 00:13:57.959 "compare_and_write": 
false, 00:13:57.959 "abort": true, 00:13:57.959 "seek_hole": false, 00:13:57.959 "seek_data": false, 00:13:57.959 "copy": true, 00:13:57.959 "nvme_iov_md": false 00:13:57.959 }, 00:13:57.959 "memory_domains": [ 00:13:57.959 { 00:13:57.959 "dma_device_id": "system", 00:13:57.959 "dma_device_type": 1 00:13:57.959 }, 00:13:57.959 { 00:13:57.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.959 "dma_device_type": 2 00:13:57.959 } 00:13:57.959 ], 00:13:57.959 "driver_specific": {} 00:13:57.959 } 00:13:57.959 ]' 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:57.959 12:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.873 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:59.873 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.873 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.873 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.873 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:01.784 12:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:01.784 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:02.044 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:02.303 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:03.247 12:58:05 
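After `nvme connect`, filesystem.sh@63-68 resolves which kernel block device belongs to the subsystem by matching the serial in `lsblk` output, cross-checks the device size against the malloc bdev size via sysfs, and lays down a single GPT partition. The sketch below reproduces the parsing step with a hedged, hypothetical stand-in for the `lsblk -l -o NAME,SERIAL` output so the extraction is visible without hardware; it uses the same GNU `grep -oP` lookahead pattern as the trace.

```shell
# Sketch of device resolution from filesystem.sh@63 (requires GNU grep -P).
serial=SPDKISFASTANDAWESOME
# Assumption: stand-in for the real `lsblk -l -o NAME,SERIAL` output line.
lsblk_out='nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(printf '%s\n' "$lsblk_out" | grep -oP "([\w]*)(?=\s+$serial)")
echo "$nvme_name"
# The size check then reads /sys/block/$nvme_name/size (512 B sectors), and
# the disk is partitioned with:
#   parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
```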
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 ************************************ 00:14:03.247 START TEST filesystem_ext4 00:14:03.247 ************************************ 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:03.247 12:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:03.247 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:03.247 mke2fs 1.47.0 (5-Feb-2023) 00:14:03.509 Discarding device blocks: 0/522240 done 00:14:03.509 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:03.509 Filesystem UUID: 7308d9d4-796f-4e24-b37f-c8dc841b53f6 00:14:03.509 Superblock backups stored on blocks: 00:14:03.509 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:03.509 00:14:03.509 Allocating group tables: 0/64 done 00:14:03.509 Writing inode tables: 0/64 done 00:14:03.769 Creating journal (8192 blocks): done 00:14:06.006 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:14:06.006 00:14:06.006 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:06.006 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:12.582 12:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 803776 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:12.582 00:14:12.582 real 0m8.880s 00:14:12.582 user 0m0.038s 00:14:12.582 sys 0m0.070s 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:12.582 ************************************ 00:14:12.582 END TEST filesystem_ext4 00:14:12.582 ************************************ 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:12.582 
12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:12.582 ************************************ 00:14:12.582 START TEST filesystem_btrfs 00:14:12.582 ************************************ 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:12.582 12:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:12.582 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:12.582 btrfs-progs v6.8.1 00:14:12.582 See https://btrfs.readthedocs.io for more information. 00:14:12.582 00:14:12.582 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:12.582 NOTE: several default settings have changed in version 5.15, please make sure 00:14:12.582 this does not affect your deployments: 00:14:12.582 - DUP for metadata (-m dup) 00:14:12.582 - enabled no-holes (-O no-holes) 00:14:12.582 - enabled free-space-tree (-R free-space-tree) 00:14:12.582 00:14:12.582 Label: (null) 00:14:12.582 UUID: edf98aad-f353-4798-bb65-7c8d518d2b47 00:14:12.582 Node size: 16384 00:14:12.582 Sector size: 4096 (CPU page size: 4096) 00:14:12.582 Filesystem size: 510.00MiB 00:14:12.582 Block group profiles: 00:14:12.582 Data: single 8.00MiB 00:14:12.582 Metadata: DUP 32.00MiB 00:14:12.582 System: DUP 8.00MiB 00:14:12.582 SSD detected: yes 00:14:12.582 Zoned device: no 00:14:12.582 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:12.582 Checksum: crc32c 00:14:12.582 Number of devices: 1 00:14:12.582 Devices: 00:14:12.582 ID SIZE PATH 00:14:12.582 1 510.00MiB /dev/nvme0n1p1 00:14:12.582 00:14:12.582 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:12.582 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:13.152 12:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 803776 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:13.152 00:14:13.152 real 0m0.789s 00:14:13.152 user 0m0.024s 00:14:13.152 sys 0m0.128s 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.152 
12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:13.152 ************************************ 00:14:13.152 END TEST filesystem_btrfs 00:14:13.152 ************************************ 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.152 ************************************ 00:14:13.152 START TEST filesystem_xfs 00:14:13.152 ************************************ 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:13.152 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:13.152 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:13.152 = sectsz=512 attr=2, projid32bit=1 00:14:13.152 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:13.152 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:13.152 data = bsize=4096 blocks=130560, imaxpct=25 00:14:13.152 = sunit=0 swidth=0 blks 00:14:13.152 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:13.152 log =internal log bsize=4096 blocks=16384, version=2 00:14:13.152 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:13.152 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:14.534 Discarding blocks...Done. 
00:14:14.534 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:14.534 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:16.440 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 803776 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:16.440 12:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:16.440 00:14:16.440 real 0m3.345s 00:14:16.440 user 0m0.036s 00:14:16.440 sys 0m0.069s 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.440 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:16.440 ************************************ 00:14:16.440 END TEST filesystem_xfs 00:14:16.440 ************************************ 00:14:16.700 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 803776 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 803776 ']' 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 803776 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.959 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 803776 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 803776' 00:14:17.219 killing process with pid 803776 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 803776 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 803776 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:17.219 00:14:17.219 real 0m20.532s 00:14:17.219 user 1m21.073s 00:14:17.219 sys 0m1.549s 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.219 ************************************ 00:14:17.219 END TEST nvmf_filesystem_no_in_capsule 00:14:17.219 ************************************ 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:17.219 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.219 12:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.479 ************************************ 00:14:17.479 START TEST nvmf_filesystem_in_capsule 00:14:17.479 ************************************ 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=808031 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 808031 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 808031 ']' 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.479 12:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.479 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.479 [2024-11-29 12:58:20.004281] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:14:17.479 [2024-11-29 12:58:20.004340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.479 [2024-11-29 12:58:20.100964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.479 [2024-11-29 12:58:20.134791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.479 [2024-11-29 12:58:20.134824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.479 [2024-11-29 12:58:20.134830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.479 [2024-11-29 12:58:20.134835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.479 [2024-11-29 12:58:20.134839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.479 [2024-11-29 12:58:20.136252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.480 [2024-11-29 12:58:20.136503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.480 [2024-11-29 12:58:20.136656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.480 [2024-11-29 12:58:20.136657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 [2024-11-29 12:58:20.848373] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 [2024-11-29 12:58:20.981857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:18.420 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.420 12:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.420 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.420 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:14:18.420 { 00:14:18.420 "name": "Malloc1", 00:14:18.420 "aliases": [ 00:14:18.420 "d1aec3c2-9bbf-499e-a481-e0ac2bf514a4" 00:14:18.420 ], 00:14:18.420 "product_name": "Malloc disk", 00:14:18.420 "block_size": 512, 00:14:18.420 "num_blocks": 1048576, 00:14:18.420 "uuid": "d1aec3c2-9bbf-499e-a481-e0ac2bf514a4", 00:14:18.420 "assigned_rate_limits": { 00:14:18.420 "rw_ios_per_sec": 0, 00:14:18.420 "rw_mbytes_per_sec": 0, 00:14:18.420 "r_mbytes_per_sec": 0, 00:14:18.420 "w_mbytes_per_sec": 0 00:14:18.420 }, 00:14:18.420 "claimed": true, 00:14:18.420 "claim_type": "exclusive_write", 00:14:18.420 "zoned": false, 00:14:18.420 "supported_io_types": { 00:14:18.420 "read": true, 00:14:18.420 "write": true, 00:14:18.420 "unmap": true, 00:14:18.420 "flush": true, 00:14:18.420 "reset": true, 00:14:18.420 "nvme_admin": false, 00:14:18.420 "nvme_io": false, 00:14:18.420 "nvme_io_md": false, 00:14:18.420 "write_zeroes": true, 00:14:18.420 "zcopy": true, 00:14:18.420 "get_zone_info": false, 00:14:18.420 "zone_management": false, 00:14:18.420 "zone_append": false, 00:14:18.420 "compare": false, 00:14:18.420 "compare_and_write": false, 00:14:18.420 "abort": true, 00:14:18.420 "seek_hole": false, 00:14:18.420 "seek_data": false, 00:14:18.420 "copy": true, 00:14:18.420 "nvme_iov_md": false 00:14:18.420 }, 00:14:18.420 "memory_domains": [ 00:14:18.420 { 00:14:18.420 "dma_device_id": "system", 00:14:18.420 "dma_device_type": 1 00:14:18.420 }, 00:14:18.420 { 00:14:18.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.420 "dma_device_type": 2 00:14:18.420 } 00:14:18.420 ], 00:14:18.420 
"driver_specific": {} 00:14:18.420 } 00:14:18.420 ]' 00:14:18.420 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:14:18.420 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:14:18.420 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:14:18.680 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:14:18.680 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:14:18.680 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:14:18.680 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:18.680 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.066 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.066 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.066 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.066 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:14:20.066 12:58:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:22.608 12:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:22.608 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:22.608 12:58:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:23.550 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:23.550 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:23.550 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:23.550 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.550 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 ************************************ 00:14:23.813 START TEST filesystem_in_capsule_ext4 00:14:23.813 ************************************ 00:14:23.813 12:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:14:23.813 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:23.813 mke2fs 1.47.0 (5-Feb-2023) 00:14:23.813 Discarding device blocks: 
0/522240 done 00:14:23.813 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:23.813 Filesystem UUID: 9837a3db-a3c0-4c74-a586-293bc10049f7 00:14:23.813 Superblock backups stored on blocks: 00:14:23.813 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:23.813 00:14:23.813 Allocating group tables: 0/64 done 00:14:23.813 Writing inode tables: 0/64 done 00:14:24.755 Creating journal (8192 blocks): done 00:14:24.756 Writing superblocks and filesystem accounting information: 0/64 done 00:14:24.756 00:14:24.756 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:14:24.756 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:30.040 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 808031 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:30.301 00:14:30.301 real 0m6.544s 00:14:30.301 user 0m0.034s 00:14:30.301 sys 0m0.072s 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:30.301 ************************************ 00:14:30.301 END TEST filesystem_in_capsule_ext4 00:14:30.301 ************************************ 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.301 ************************************ 00:14:30.301 START 
TEST filesystem_in_capsule_btrfs 00:14:30.301 ************************************ 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:14:30.301 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:30.562 btrfs-progs v6.8.1 00:14:30.562 See https://btrfs.readthedocs.io for more information. 00:14:30.562 00:14:30.562 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:30.562 NOTE: several default settings have changed in version 5.15, please make sure 00:14:30.562 this does not affect your deployments: 00:14:30.562 - DUP for metadata (-m dup) 00:14:30.562 - enabled no-holes (-O no-holes) 00:14:30.562 - enabled free-space-tree (-R free-space-tree) 00:14:30.562 00:14:30.562 Label: (null) 00:14:30.562 UUID: 77dcb491-1662-4e07-8313-802df875c685 00:14:30.562 Node size: 16384 00:14:30.562 Sector size: 4096 (CPU page size: 4096) 00:14:30.562 Filesystem size: 510.00MiB 00:14:30.562 Block group profiles: 00:14:30.562 Data: single 8.00MiB 00:14:30.562 Metadata: DUP 32.00MiB 00:14:30.562 System: DUP 8.00MiB 00:14:30.562 SSD detected: yes 00:14:30.562 Zoned device: no 00:14:30.562 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:30.562 Checksum: crc32c 00:14:30.562 Number of devices: 1 00:14:30.562 Devices: 00:14:30.562 ID SIZE PATH 00:14:30.562 1 510.00MiB /dev/nvme0n1p1 00:14:30.562 00:14:30.562 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:14:30.562 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:30.562 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:30.562 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:30.562 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 808031 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:30.823 00:14:30.823 real 0m0.428s 00:14:30.823 user 0m0.020s 00:14:30.823 sys 0m0.126s 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:30.823 ************************************ 00:14:30.823 END TEST filesystem_in_capsule_btrfs 00:14:30.823 ************************************ 00:14:30.823 12:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.823 ************************************ 00:14:30.823 START TEST filesystem_in_capsule_xfs 00:14:30.823 ************************************ 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:14:30.823 
12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:14:30.823 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:30.823 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:30.823 = sectsz=512 attr=2, projid32bit=1 00:14:30.823 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:30.823 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:30.823 data = bsize=4096 blocks=130560, imaxpct=25 00:14:30.823 = sunit=0 swidth=0 blks 00:14:30.823 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:30.823 log =internal log bsize=4096 blocks=16384, version=2 00:14:30.823 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:30.823 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:31.766 Discarding blocks...Done. 
00:14:31.766 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:14:31.766 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:33.677 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 808031 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:33.678 00:14:33.678 real 0m2.719s 00:14:33.678 user 0m0.029s 00:14:33.678 sys 0m0.077s 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:33.678 ************************************ 00:14:33.678 END TEST filesystem_in_capsule_xfs 00:14:33.678 ************************************ 00:14:33.678 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.938 12:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 808031 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 808031 ']' 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 808031 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:14:33.938 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.938 12:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 808031 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 808031' 00:14:34.200 killing process with pid 808031 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 808031 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 808031 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:34.200 00:14:34.200 real 0m16.939s 00:14:34.200 user 1m6.941s 00:14:34.200 sys 0m1.367s 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.200 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:34.200 ************************************ 00:14:34.200 END TEST nvmf_filesystem_in_capsule 00:14:34.200 ************************************ 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.461 rmmod nvme_tcp 00:14:34.461 rmmod nvme_fabrics 00:14:34.461 rmmod nvme_keyring 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.461 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.442 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.442 00:14:36.442 real 0m47.765s 00:14:36.442 user 2m30.434s 00:14:36.442 sys 0m8.757s 00:14:36.442 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.442 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.442 ************************************ 00:14:36.442 END TEST nvmf_filesystem 00:14:36.442 ************************************ 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.755 ************************************ 00:14:36.755 START TEST nvmf_target_discovery 00:14:36.755 ************************************ 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:36.755 * Looking for test storage... 
00:14:36.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:36.755 
12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:36.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.755 --rc genhtml_branch_coverage=1 00:14:36.755 --rc genhtml_function_coverage=1 00:14:36.755 --rc genhtml_legend=1 00:14:36.755 --rc geninfo_all_blocks=1 00:14:36.755 --rc geninfo_unexecuted_blocks=1 00:14:36.755 00:14:36.755 ' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:36.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.755 --rc genhtml_branch_coverage=1 00:14:36.755 --rc genhtml_function_coverage=1 00:14:36.755 --rc genhtml_legend=1 00:14:36.755 --rc geninfo_all_blocks=1 00:14:36.755 --rc geninfo_unexecuted_blocks=1 00:14:36.755 00:14:36.755 ' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:36.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.755 --rc genhtml_branch_coverage=1 00:14:36.755 --rc genhtml_function_coverage=1 00:14:36.755 --rc genhtml_legend=1 00:14:36.755 --rc geninfo_all_blocks=1 00:14:36.755 --rc geninfo_unexecuted_blocks=1 00:14:36.755 00:14:36.755 ' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:36.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.755 --rc genhtml_branch_coverage=1 00:14:36.755 --rc genhtml_function_coverage=1 00:14:36.755 --rc genhtml_legend=1 00:14:36.755 --rc geninfo_all_blocks=1 00:14:36.755 --rc geninfo_unexecuted_blocks=1 00:14:36.755 00:14:36.755 ' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.755 12:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.755 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.756 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.970 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.970 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.970 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.970 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.970 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.970 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.971 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:44.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:44.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.971 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:44.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.971 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:44.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.971 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:14:44.971 00:14:44.971 --- 10.0.0.2 ping statistics --- 00:14:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.972 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:14:44.972 00:14:44.972 --- 10.0.0.1 ping statistics --- 00:14:44.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.972 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=815757 00:14:44.972 12:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 815757 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 815757 ']' 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.972 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.972 [2024-11-29 12:58:46.957933] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:14:44.972 [2024-11-29 12:58:46.958001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.972 [2024-11-29 12:58:47.059171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.972 [2024-11-29 12:58:47.112665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:44.972 [2024-11-29 12:58:47.112720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.972 [2024-11-29 12:58:47.112729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.972 [2024-11-29 12:58:47.112736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.972 [2024-11-29 12:58:47.112742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.972 [2024-11-29 12:58:47.115150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.972 [2024-11-29 12:58:47.115313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.972 [2024-11-29 12:58:47.115518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.972 [2024-11-29 12:58:47.115518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 [2024-11-29 12:58:47.833802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 Null1 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 
12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.233 [2024-11-29 12:58:47.901435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.233 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 Null2 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 
12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 Null3 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.495 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:45.496 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 Null4 00:14:45.496 
12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.496 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:45.757 00:14:45.757 Discovery Log Number of Records 6, Generation counter 6 00:14:45.757 =====Discovery Log Entry 0====== 00:14:45.757 trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: current discovery subsystem 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4420 00:14:45.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: explicit discovery connections, duplicate discovery information 00:14:45.757 sectype: none 00:14:45.757 =====Discovery Log Entry 1====== 00:14:45.757 trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: nvme subsystem 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4420 00:14:45.757 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: none 00:14:45.757 sectype: none 00:14:45.757 =====Discovery Log Entry 2====== 00:14:45.757 
trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: nvme subsystem 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4420 00:14:45.757 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: none 00:14:45.757 sectype: none 00:14:45.757 =====Discovery Log Entry 3====== 00:14:45.757 trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: nvme subsystem 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4420 00:14:45.757 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: none 00:14:45.757 sectype: none 00:14:45.757 =====Discovery Log Entry 4====== 00:14:45.757 trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: nvme subsystem 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4420 00:14:45.757 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: none 00:14:45.757 sectype: none 00:14:45.757 =====Discovery Log Entry 5====== 00:14:45.757 trtype: tcp 00:14:45.757 adrfam: ipv4 00:14:45.757 subtype: discovery subsystem referral 00:14:45.757 treq: not required 00:14:45.757 portid: 0 00:14:45.757 trsvcid: 4430 00:14:45.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.757 traddr: 10.0.0.2 00:14:45.757 eflags: none 00:14:45.757 sectype: none 00:14:45.757 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:45.757 Perform nvmf subsystem discovery via RPC 00:14:45.757 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:45.757 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.757 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.757 [ 00:14:45.757 { 00:14:45.757 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:45.757 "subtype": "Discovery", 00:14:45.757 "listen_addresses": [ 00:14:45.757 { 00:14:45.757 "trtype": "TCP", 00:14:45.757 "adrfam": "IPv4", 00:14:45.757 "traddr": "10.0.0.2", 00:14:45.757 "trsvcid": "4420" 00:14:45.757 } 00:14:45.757 ], 00:14:45.757 "allow_any_host": true, 00:14:45.757 "hosts": [] 00:14:45.757 }, 00:14:45.757 { 00:14:45.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.757 "subtype": "NVMe", 00:14:45.757 "listen_addresses": [ 00:14:45.757 { 00:14:45.757 "trtype": "TCP", 00:14:45.757 "adrfam": "IPv4", 00:14:45.757 "traddr": "10.0.0.2", 00:14:45.757 "trsvcid": "4420" 00:14:45.757 } 00:14:45.757 ], 00:14:45.757 "allow_any_host": true, 00:14:45.757 "hosts": [], 00:14:45.757 "serial_number": "SPDK00000000000001", 00:14:45.757 "model_number": "SPDK bdev Controller", 00:14:45.757 "max_namespaces": 32, 00:14:45.757 "min_cntlid": 1, 00:14:45.757 "max_cntlid": 65519, 00:14:45.757 "namespaces": [ 00:14:45.757 { 00:14:45.757 "nsid": 1, 00:14:45.757 "bdev_name": "Null1", 00:14:45.757 "name": "Null1", 00:14:45.757 "nguid": "3DAD82B32EE647F7AF3CF027E81F3610", 00:14:45.757 "uuid": "3dad82b3-2ee6-47f7-af3c-f027e81f3610" 00:14:45.757 } 00:14:45.757 ] 00:14:45.757 }, 00:14:45.757 { 00:14:45.757 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:45.757 "subtype": "NVMe", 00:14:45.757 "listen_addresses": [ 00:14:45.757 { 00:14:45.757 "trtype": "TCP", 00:14:45.757 "adrfam": "IPv4", 00:14:45.757 "traddr": "10.0.0.2", 00:14:45.757 "trsvcid": "4420" 00:14:45.757 } 00:14:45.757 ], 00:14:45.757 "allow_any_host": true, 00:14:45.757 "hosts": [], 00:14:45.757 "serial_number": "SPDK00000000000002", 00:14:45.757 "model_number": "SPDK bdev Controller", 00:14:45.757 "max_namespaces": 32, 00:14:45.757 "min_cntlid": 1, 00:14:45.757 "max_cntlid": 65519, 00:14:45.757 "namespaces": [ 00:14:45.757 { 00:14:45.757 "nsid": 1, 00:14:45.757 "bdev_name": "Null2", 00:14:45.757 "name": "Null2", 00:14:45.757 "nguid": "78418A5716F6482CB2CEFA5F8CD5F7D7", 
00:14:45.757 "uuid": "78418a57-16f6-482c-b2ce-fa5f8cd5f7d7" 00:14:45.757 } 00:14:45.757 ] 00:14:45.757 }, 00:14:45.757 { 00:14:45.757 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:45.757 "subtype": "NVMe", 00:14:45.757 "listen_addresses": [ 00:14:45.757 { 00:14:45.757 "trtype": "TCP", 00:14:45.757 "adrfam": "IPv4", 00:14:45.757 "traddr": "10.0.0.2", 00:14:45.757 "trsvcid": "4420" 00:14:45.757 } 00:14:45.757 ], 00:14:45.757 "allow_any_host": true, 00:14:45.757 "hosts": [], 00:14:45.757 "serial_number": "SPDK00000000000003", 00:14:45.757 "model_number": "SPDK bdev Controller", 00:14:45.757 "max_namespaces": 32, 00:14:45.757 "min_cntlid": 1, 00:14:45.757 "max_cntlid": 65519, 00:14:45.757 "namespaces": [ 00:14:45.757 { 00:14:45.757 "nsid": 1, 00:14:45.757 "bdev_name": "Null3", 00:14:45.757 "name": "Null3", 00:14:45.757 "nguid": "78D7E3969E7B450D8FACC04DACC6961D", 00:14:45.757 "uuid": "78d7e396-9e7b-450d-8fac-c04dacc6961d" 00:14:45.757 } 00:14:45.757 ] 00:14:45.757 }, 00:14:45.757 { 00:14:45.757 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:45.758 "subtype": "NVMe", 00:14:45.758 "listen_addresses": [ 00:14:45.758 { 00:14:45.758 "trtype": "TCP", 00:14:45.758 "adrfam": "IPv4", 00:14:45.758 "traddr": "10.0.0.2", 00:14:45.758 "trsvcid": "4420" 00:14:45.758 } 00:14:45.758 ], 00:14:45.758 "allow_any_host": true, 00:14:45.758 "hosts": [], 00:14:45.758 "serial_number": "SPDK00000000000004", 00:14:45.758 "model_number": "SPDK bdev Controller", 00:14:45.758 "max_namespaces": 32, 00:14:45.758 "min_cntlid": 1, 00:14:45.758 "max_cntlid": 65519, 00:14:45.758 "namespaces": [ 00:14:45.758 { 00:14:45.758 "nsid": 1, 00:14:45.758 "bdev_name": "Null4", 00:14:45.758 "name": "Null4", 00:14:45.758 "nguid": "D23D0E5AFA994251B68E9977BB49E779", 00:14:45.758 "uuid": "d23d0e5a-fa99-4251-b68e-9977bb49e779" 00:14:45.758 } 00:14:45.758 ] 00:14:45.758 } 00:14:45.758 ] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 
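The nvmf_get_subsystems JSON above can be checked programmatically rather than by eye. A small sketch that extracts every subsystem NQN; a canned two-entry sample of the log's JSON stands in for a live target, and sed is used in place of jq to keep the sketch dependency-free (with a real target you would pipe `rpc.py nvmf_get_subsystems` instead):

```shell
#!/usr/bin/env bash
# Pull the "nqn" field out of each nvmf_get_subsystems entry.
# get_subsystems is a stand-in for: scripts/rpc.py nvmf_get_subsystems
get_subsystems() {
    cat <<'EOF'
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"}
]
EOF
}

# One NQN per output line; jq -r '.[].nqn' would do the same.
get_subsystems | sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p'
```

The test script uses the same idea in reverse at teardown: after deleting the subsystems it runs `bdev_get_bdevs | jq -r '.[].name'` and expects an empty list, which is the `check_bdevs=` line visible further down in the log.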
12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.758 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.020 rmmod nvme_tcp 00:14:46.020 rmmod nvme_fabrics 00:14:46.020 rmmod nvme_keyring 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 815757 ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 815757 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 815757 ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 815757 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:14:46.020 
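The teardown trace above shows a killprocess-style helper at work: it confirms the pid is still alive with `kill -0`, inspects the command name with `ps` so it never kills a bare `sudo`, then signals the process. A minimal sketch of that pattern (the helper name and the "killing process" message mirror the log; this is not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Guarded process kill, as traced in the log's nvmftestfini path.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only checks the pid exists and is ours
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # Refuse to kill a process whose command is sudo itself
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
}
```

In the log the guarded name is `reactor_0` (the SPDK app thread), so the comparison against `sudo` fails and the target process with pid 815757 is killed, after which the script waits for it to exit.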
12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815757 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815757' 00:14:46.020 killing process with pid 815757 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 815757 00:14:46.020 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 815757 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.282 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.193 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.454 00:14:48.454 real 0m11.730s 00:14:48.454 user 0m9.079s 00:14:48.454 sys 0m6.086s 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.454 ************************************ 00:14:48.454 END TEST nvmf_target_discovery 00:14:48.454 ************************************ 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.454 ************************************ 00:14:48.454 START TEST nvmf_referrals 00:14:48.454 ************************************ 00:14:48.454 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:48.454 * Looking for test storage... 
00:14:48.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.454 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.454 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.454 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:48.715 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.715 
--rc genhtml_branch_coverage=1 00:14:48.715 --rc genhtml_function_coverage=1 00:14:48.715 --rc genhtml_legend=1 00:14:48.715 --rc geninfo_all_blocks=1 00:14:48.715 --rc geninfo_unexecuted_blocks=1 00:14:48.715 00:14:48.715 ' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.715 --rc genhtml_branch_coverage=1 00:14:48.715 --rc genhtml_function_coverage=1 00:14:48.715 --rc genhtml_legend=1 00:14:48.715 --rc geninfo_all_blocks=1 00:14:48.715 --rc geninfo_unexecuted_blocks=1 00:14:48.715 00:14:48.715 ' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.715 --rc genhtml_branch_coverage=1 00:14:48.715 --rc genhtml_function_coverage=1 00:14:48.715 --rc genhtml_legend=1 00:14:48.715 --rc geninfo_all_blocks=1 00:14:48.715 --rc geninfo_unexecuted_blocks=1 00:14:48.715 00:14:48.715 ' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.715 --rc genhtml_branch_coverage=1 00:14:48.715 --rc genhtml_function_coverage=1 00:14:48.715 --rc genhtml_legend=1 00:14:48.715 --rc geninfo_all_blocks=1 00:14:48.715 --rc geninfo_unexecuted_blocks=1 00:14:48.715 00:14:48.715 ' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.715 
12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.715 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.716 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.716 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.716 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.856 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.856 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.857 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.857 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.857 12:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.857 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:56.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:56.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms
00:14:56.857 
00:14:56.857 --- 10.0.0.2 ping statistics ---
00:14:56.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:56.857 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:56.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms
00:14:56.857 
00:14:56.857 --- 10.0.0.1 ping statistics ---
00:14:56.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:56.857 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=820308
00:14:56.857 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 820308
00:14:56.858 
12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 820308 ']' 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.858 12:58:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.858 [2024-11-29 12:58:58.823217] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:14:56.858 [2024-11-29 12:58:58.823284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.858 [2024-11-29 12:58:58.923402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.858 [2024-11-29 12:58:58.976748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.858 [2024-11-29 12:58:58.976800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.858 [2024-11-29 12:58:58.976808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:56.858 [2024-11-29 12:58:58.976815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:56.858 [2024-11-29 12:58:58.976822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:56.858 [2024-11-29 12:58:58.979215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:56.858 [2024-11-29 12:58:58.979317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:56.858 [2024-11-29 12:58:58.979469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:56.858 [2024-11-29 12:58:58.979470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:14:57.120 [2024-11-29 12:58:59.701702] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 [2024-11-29 12:58:59.735474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.120 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.382 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.382 12:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.382 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:57.645 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:57.907 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.169 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.430 12:59:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:58.691 12:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.691 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.953 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.213 rmmod nvme_tcp 00:14:59.213 rmmod nvme_fabrics 00:14:59.213 rmmod nvme_keyring 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 820308 ']' 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 820308 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 820308 ']' 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 820308 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.213 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820308 00:14:59.474 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.474 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.474 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820308' 00:14:59.474 killing process with pid 820308 00:14:59.474 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 820308 00:14:59.474 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 820308 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.474 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:02.020 00:15:02.020 real 0m13.144s 00:15:02.020 user 0m15.230s 00:15:02.020 sys 0m6.609s 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 ************************************ 
00:15:02.020 END TEST nvmf_referrals 00:15:02.020 ************************************ 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.020 12:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 ************************************ 00:15:02.020 START TEST nvmf_connect_disconnect 00:15:02.020 ************************************ 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:02.021 * Looking for test storage... 
00:15:02.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:02.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.021 --rc genhtml_branch_coverage=1 00:15:02.021 --rc genhtml_function_coverage=1 00:15:02.021 --rc genhtml_legend=1 00:15:02.021 --rc geninfo_all_blocks=1 00:15:02.021 --rc geninfo_unexecuted_blocks=1 00:15:02.021 00:15:02.021 ' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:02.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.021 --rc genhtml_branch_coverage=1 00:15:02.021 --rc genhtml_function_coverage=1 00:15:02.021 --rc genhtml_legend=1 00:15:02.021 --rc geninfo_all_blocks=1 00:15:02.021 --rc geninfo_unexecuted_blocks=1 00:15:02.021 00:15:02.021 ' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:02.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.021 --rc genhtml_branch_coverage=1 00:15:02.021 --rc genhtml_function_coverage=1 00:15:02.021 --rc genhtml_legend=1 00:15:02.021 --rc geninfo_all_blocks=1 00:15:02.021 --rc geninfo_unexecuted_blocks=1 00:15:02.021 00:15:02.021 ' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:02.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.021 --rc genhtml_branch_coverage=1 00:15:02.021 --rc genhtml_function_coverage=1 00:15:02.021 --rc genhtml_legend=1 00:15:02.021 --rc geninfo_all_blocks=1 00:15:02.021 --rc geninfo_unexecuted_blocks=1 00:15:02.021 00:15:02.021 ' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.021 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.022 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.168 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.168 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:10.168 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:10.168 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.168 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.168 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:10.169 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.169 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:10.169 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.169 12:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:15:10.169 00:15:10.169 --- 10.0.0.2 ping statistics --- 00:15:10.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.169 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:15:10.169 00:15:10.169 --- 10.0.0.1 ping statistics --- 00:15:10.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.169 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=825364 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 825364 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 825364 ']' 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.169 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.169 [2024-11-29 12:59:12.038334] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:15:10.169 [2024-11-29 12:59:12.038399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.169 [2024-11-29 12:59:12.137263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.169 [2024-11-29 12:59:12.190288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:10.169 [2024-11-29 12:59:12.190341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.169 [2024-11-29 12:59:12.190355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.169 [2024-11-29 12:59:12.190362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.169 [2024-11-29 12:59:12.190368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.169 [2024-11-29 12:59:12.192404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.169 [2024-11-29 12:59:12.192568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.169 [2024-11-29 12:59:12.192729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.169 [2024-11-29 12:59:12.192729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:10.430 12:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.430 [2024-11-29 12:59:12.918346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.430 12:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.430 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:10.431 [2024-11-29 12:59:12.995616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:10.431 12:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:14.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:28.851 12:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.851 rmmod nvme_tcp 00:15:28.851 rmmod nvme_fabrics 00:15:28.851 rmmod nvme_keyring 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 825364 ']' 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 825364 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 825364 ']' 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 825364 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.851 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825364 00:15:28.851 
12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825364' 00:15:28.852 killing process with pid 825364 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 825364 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 825364 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.852 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:31.399 00:15:31.399 real 0m29.414s 00:15:31.399 user 1m19.176s 00:15:31.399 sys 0m7.179s 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:31.399 ************************************ 00:15:31.399 END TEST nvmf_connect_disconnect 00:15:31.399 ************************************ 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.399 ************************************ 00:15:31.399 START TEST nvmf_multitarget 00:15:31.399 ************************************ 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:31.399 * Looking for test storage... 
00:15:31.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:31.399 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.399 --rc genhtml_branch_coverage=1 00:15:31.399 --rc genhtml_function_coverage=1 00:15:31.399 --rc genhtml_legend=1 00:15:31.399 --rc geninfo_all_blocks=1 00:15:31.399 --rc geninfo_unexecuted_blocks=1 00:15:31.399 00:15:31.399 ' 00:15:31.399 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:31.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.400 --rc genhtml_branch_coverage=1 00:15:31.400 --rc genhtml_function_coverage=1 00:15:31.400 --rc genhtml_legend=1 00:15:31.400 --rc geninfo_all_blocks=1 00:15:31.400 --rc geninfo_unexecuted_blocks=1 00:15:31.400 00:15:31.400 ' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:31.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.400 --rc genhtml_branch_coverage=1 00:15:31.400 --rc genhtml_function_coverage=1 00:15:31.400 --rc genhtml_legend=1 00:15:31.400 --rc geninfo_all_blocks=1 00:15:31.400 --rc geninfo_unexecuted_blocks=1 00:15:31.400 00:15:31.400 ' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:31.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.400 --rc genhtml_branch_coverage=1 00:15:31.400 --rc genhtml_function_coverage=1 00:15:31.400 --rc genhtml_legend=1 00:15:31.400 --rc geninfo_all_blocks=1 00:15:31.400 --rc geninfo_unexecuted_blocks=1 00:15:31.400 00:15:31.400 ' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.400 12:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.400 12:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:31.400 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.401 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:39.543 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.543 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:39.544 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.544 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.544 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.544 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.544 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.544 
12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.544 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.544 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.544 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:39.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:15:39.545 00:15:39.545 --- 10.0.0.2 ping statistics --- 00:15:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.545 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:15:39.545 00:15:39.545 --- 10.0.0.1 ping statistics --- 00:15:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.545 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=833312 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 833312 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 833312 ']' 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.545 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.545 [2024-11-29 12:59:41.445878] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:15:39.545 [2024-11-29 12:59:41.445944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.545 [2024-11-29 12:59:41.545830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.545 [2024-11-29 12:59:41.599504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.545 [2024-11-29 12:59:41.599560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:39.545 [2024-11-29 12:59:41.599569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.545 [2024-11-29 12:59:41.599577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.545 [2024-11-29 12:59:41.599583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.545 [2024-11-29 12:59:41.601887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.545 [2024-11-29 12:59:41.602051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.545 [2024-11-29 12:59:41.602217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.545 [2024-11-29 12:59:41.602262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:39.807 12:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:39.807 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:40.069 "nvmf_tgt_1" 00:15:40.069 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:40.069 "nvmf_tgt_2" 00:15:40.069 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:40.069 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:40.330 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:40.330 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:40.330 true 00:15:40.330 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:40.330 true 00:15:40.330 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:40.330 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.591 rmmod nvme_tcp 00:15:40.591 rmmod nvme_fabrics 00:15:40.591 rmmod nvme_keyring 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 833312 ']' 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 833312 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 833312 ']' 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 833312 00:15:40.591 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833312 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833312' 00:15:40.592 killing process with pid 833312 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 833312 00:15:40.592 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 833312 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.852 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:43.399 00:15:43.399 real 0m11.813s 00:15:43.399 user 0m10.216s 00:15:43.399 sys 0m6.146s 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:43.399 ************************************ 00:15:43.399 END TEST nvmf_multitarget 00:15:43.399 ************************************ 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:43.399 ************************************ 00:15:43.399 START TEST nvmf_rpc 00:15:43.399 ************************************ 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:43.399 * Looking for test storage... 
00:15:43.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.399 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.399 --rc genhtml_branch_coverage=1 00:15:43.399 --rc genhtml_function_coverage=1 00:15:43.399 --rc genhtml_legend=1 00:15:43.399 --rc geninfo_all_blocks=1 00:15:43.399 --rc geninfo_unexecuted_blocks=1 
00:15:43.399 00:15:43.399 ' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.399 --rc genhtml_branch_coverage=1 00:15:43.399 --rc genhtml_function_coverage=1 00:15:43.399 --rc genhtml_legend=1 00:15:43.399 --rc geninfo_all_blocks=1 00:15:43.399 --rc geninfo_unexecuted_blocks=1 00:15:43.399 00:15:43.399 ' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.399 --rc genhtml_branch_coverage=1 00:15:43.399 --rc genhtml_function_coverage=1 00:15:43.399 --rc genhtml_legend=1 00:15:43.399 --rc geninfo_all_blocks=1 00:15:43.399 --rc geninfo_unexecuted_blocks=1 00:15:43.399 00:15:43.399 ' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.399 --rc genhtml_branch_coverage=1 00:15:43.399 --rc genhtml_function_coverage=1 00:15:43.399 --rc genhtml_legend=1 00:15:43.399 --rc geninfo_all_blocks=1 00:15:43.399 --rc geninfo_unexecuted_blocks=1 00:15:43.399 00:15:43.399 ' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.399 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.399 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:43.400 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:43.400 12:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.546 
12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:15:51.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:51.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:51.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:51.546 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.546 12:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.546 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.547 
12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:15:51.547 00:15:51.547 --- 10.0.0.2 ping statistics --- 00:15:51.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.547 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:15:51.547 00:15:51.547 --- 10.0.0.1 ping statistics --- 00:15:51.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.547 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=837921 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 837921 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 837921 ']' 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.547 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.547 [2024-11-29 12:59:53.456653] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:15:51.547 [2024-11-29 12:59:53.456720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.547 [2024-11-29 12:59:53.557677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.547 [2024-11-29 12:59:53.610996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.547 [2024-11-29 12:59:53.611054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.547 [2024-11-29 12:59:53.611062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.547 [2024-11-29 12:59:53.611071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.547 [2024-11-29 12:59:53.611078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.547 [2024-11-29 12:59:53.613486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.547 [2024-11-29 12:59:53.613645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.547 [2024-11-29 12:59:53.613806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.547 [2024-11-29 12:59:53.613806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.810 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:51.810 "tick_rate": 2400000000, 00:15:51.810 "poll_groups": [ 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_000", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 "completed_nvme_io": 0, 00:15:51.810 "transports": [] 00:15:51.810 }, 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_001", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 "completed_nvme_io": 0, 00:15:51.810 "transports": [] 00:15:51.810 }, 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_002", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 "completed_nvme_io": 0, 00:15:51.810 "transports": [] 00:15:51.810 }, 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_003", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 "completed_nvme_io": 0, 00:15:51.810 "transports": [] 00:15:51.810 } 00:15:51.810 ] 00:15:51.810 }' 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:51.810 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 [2024-11-29 12:59:54.456829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.810 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:51.810 "tick_rate": 2400000000, 00:15:51.810 "poll_groups": [ 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_000", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 "completed_nvme_io": 0, 00:15:51.810 "transports": [ 00:15:51.810 { 00:15:51.810 "trtype": "TCP" 00:15:51.810 } 00:15:51.810 ] 00:15:51.810 }, 00:15:51.810 { 00:15:51.810 "name": "nvmf_tgt_poll_group_001", 00:15:51.810 "admin_qpairs": 0, 00:15:51.810 "io_qpairs": 0, 00:15:51.810 "current_admin_qpairs": 0, 00:15:51.810 "current_io_qpairs": 0, 00:15:51.810 "pending_bdev_io": 0, 00:15:51.810 
"completed_nvme_io": 0, 00:15:51.810 "transports": [ 00:15:51.811 { 00:15:51.811 "trtype": "TCP" 00:15:51.811 } 00:15:51.811 ] 00:15:51.811 }, 00:15:51.811 { 00:15:51.811 "name": "nvmf_tgt_poll_group_002", 00:15:51.811 "admin_qpairs": 0, 00:15:51.811 "io_qpairs": 0, 00:15:51.811 "current_admin_qpairs": 0, 00:15:51.811 "current_io_qpairs": 0, 00:15:51.811 "pending_bdev_io": 0, 00:15:51.811 "completed_nvme_io": 0, 00:15:51.811 "transports": [ 00:15:51.811 { 00:15:51.811 "trtype": "TCP" 00:15:51.811 } 00:15:51.811 ] 00:15:51.811 }, 00:15:51.811 { 00:15:51.811 "name": "nvmf_tgt_poll_group_003", 00:15:51.811 "admin_qpairs": 0, 00:15:51.811 "io_qpairs": 0, 00:15:51.811 "current_admin_qpairs": 0, 00:15:51.811 "current_io_qpairs": 0, 00:15:51.811 "pending_bdev_io": 0, 00:15:51.811 "completed_nvme_io": 0, 00:15:51.811 "transports": [ 00:15:51.811 { 00:15:51.811 "trtype": "TCP" 00:15:51.811 } 00:15:51.811 ] 00:15:51.811 } 00:15:51.811 ] 00:15:51.811 }' 00:15:51.811 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:51.811 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:51.811 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:52.072 
12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 Malloc1 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:52.072 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 [2024-11-29 12:59:54.642216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:52.072 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:52.073 [2024-11-29 12:59:54.679205] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:52.073 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:52.073 could not add new controller: failed to write to nvme-fabrics device 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.073 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.987 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.987 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:53.987 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.987 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:53.987 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:55.902 12:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.902 [2024-11-29 12:59:58.434560] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:55.902 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:55.902 could not add new controller: failed to write to nvme-fabrics device 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:55.902 
12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.902 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.815 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.815 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:57.815 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.815 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:57.815 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:59.725 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:59.726 13:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.726 [2024-11-29 13:00:02.201067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.726 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.633 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:01.633 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.633 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.633 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:01.633 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:03.544 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:03.544 
13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 [2024-11-29 13:00:06.054819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.544 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.931 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:04.931 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.931 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.931 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:04.931 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 13:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 [2024-11-29 13:00:09.777684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.476 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.864 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.864 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.865 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.865 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:08.865 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:10.776 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 [2024-11-29 13:00:13.557309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.037 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.421 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.421 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.421 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:12.421 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.421 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.961 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.961 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.961 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 [2024-11-29 13:00:17.272576] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.962 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.344 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.344 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.344 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.344 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.344 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:18.256 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.517 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.517 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 [2024-11-29 13:00:21.031566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 [2024-11-29 13:00:21.103736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 
13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:16:18.518 [2024-11-29 13:00:21.171912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.518 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 [2024-11-29 13:00:21.244146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 [2024-11-29 13:00:21.312359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.780 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:18.781 "tick_rate": 2400000000, 00:16:18.781 "poll_groups": [ 00:16:18.781 { 00:16:18.781 "name": "nvmf_tgt_poll_group_000", 00:16:18.781 "admin_qpairs": 0, 00:16:18.781 "io_qpairs": 224, 00:16:18.781 "current_admin_qpairs": 0, 00:16:18.781 "current_io_qpairs": 0, 00:16:18.781 "pending_bdev_io": 0, 00:16:18.781 "completed_nvme_io": 276, 00:16:18.781 "transports": [ 00:16:18.781 { 00:16:18.781 "trtype": "TCP" 00:16:18.781 } 00:16:18.781 ] 00:16:18.781 }, 00:16:18.781 { 00:16:18.781 "name": "nvmf_tgt_poll_group_001", 00:16:18.781 "admin_qpairs": 1, 00:16:18.781 "io_qpairs": 223, 00:16:18.781 "current_admin_qpairs": 0, 00:16:18.781 "current_io_qpairs": 0, 00:16:18.781 "pending_bdev_io": 0, 00:16:18.781 "completed_nvme_io": 518, 00:16:18.781 "transports": [ 00:16:18.781 { 00:16:18.781 "trtype": "TCP" 00:16:18.781 } 00:16:18.781 ] 00:16:18.781 }, 00:16:18.781 { 00:16:18.781 "name": "nvmf_tgt_poll_group_002", 00:16:18.781 "admin_qpairs": 6, 00:16:18.781 "io_qpairs": 218, 00:16:18.781 "current_admin_qpairs": 0, 00:16:18.781 "current_io_qpairs": 0, 00:16:18.781 "pending_bdev_io": 0, 
00:16:18.781 "completed_nvme_io": 219, 00:16:18.781 "transports": [ 00:16:18.781 { 00:16:18.781 "trtype": "TCP" 00:16:18.781 } 00:16:18.781 ] 00:16:18.781 }, 00:16:18.781 { 00:16:18.781 "name": "nvmf_tgt_poll_group_003", 00:16:18.781 "admin_qpairs": 0, 00:16:18.781 "io_qpairs": 224, 00:16:18.781 "current_admin_qpairs": 0, 00:16:18.781 "current_io_qpairs": 0, 00:16:18.781 "pending_bdev_io": 0, 00:16:18.781 "completed_nvme_io": 226, 00:16:18.781 "transports": [ 00:16:18.781 { 00:16:18.781 "trtype": "TCP" 00:16:18.781 } 00:16:18.781 ] 00:16:18.781 } 00:16:18.781 ] 00:16:18.781 }' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:18.781 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.042 rmmod nvme_tcp 00:16:19.042 rmmod nvme_fabrics 00:16:19.042 rmmod nvme_keyring 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 837921 ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 837921 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 837921 ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 837921 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 837921 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 837921' 00:16:19.042 killing process with pid 837921 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 837921 00:16:19.042 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 837921 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.303 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:21.216 00:16:21.216 real 0m38.243s 00:16:21.216 user 1m54.370s 00:16:21.216 sys 0m8.043s 00:16:21.216 13:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.216 ************************************ 00:16:21.216 END TEST nvmf_rpc 00:16:21.216 ************************************ 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.216 13:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.478 ************************************ 00:16:21.478 START TEST nvmf_invalid 00:16:21.478 ************************************ 00:16:21.478 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:21.478 * Looking for test storage... 
00:16:21.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.478 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:21.478 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.479 --rc genhtml_branch_coverage=1 00:16:21.479 --rc 
genhtml_function_coverage=1 00:16:21.479 --rc genhtml_legend=1 00:16:21.479 --rc geninfo_all_blocks=1 00:16:21.479 --rc geninfo_unexecuted_blocks=1 00:16:21.479 00:16:21.479 ' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.479 --rc genhtml_branch_coverage=1 00:16:21.479 --rc genhtml_function_coverage=1 00:16:21.479 --rc genhtml_legend=1 00:16:21.479 --rc geninfo_all_blocks=1 00:16:21.479 --rc geninfo_unexecuted_blocks=1 00:16:21.479 00:16:21.479 ' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.479 --rc genhtml_branch_coverage=1 00:16:21.479 --rc genhtml_function_coverage=1 00:16:21.479 --rc genhtml_legend=1 00:16:21.479 --rc geninfo_all_blocks=1 00:16:21.479 --rc geninfo_unexecuted_blocks=1 00:16:21.479 00:16:21.479 ' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.479 --rc genhtml_branch_coverage=1 00:16:21.479 --rc genhtml_function_coverage=1 00:16:21.479 --rc genhtml_legend=1 00:16:21.479 --rc geninfo_all_blocks=1 00:16:21.479 --rc geninfo_unexecuted_blocks=1 00:16:21.479 00:16:21.479 ' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.479 13:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:21.479 13:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:21.479 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.619 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.619 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:29.620 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.620 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:29.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:29.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:29.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:29.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.620 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.620 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:29.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:16:29.620 00:16:29.620 --- 10.0.0.2 ping statistics --- 00:16:29.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.620 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:16:29.620 00:16:29.620 --- 10.0.0.1 ping statistics --- 00:16:29.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.620 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:29.620 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:29.621 13:00:31 
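Condensed from the `nvmf_tcp_init` trace above: the TCP test topology moves the target-side interface into a private network namespace, addresses both ends out of 10.0.0.0/24, opens TCP/4420 on the initiator side, and verifies reachability with a ping in each direction. A sketch of that privileged configuration sequence (root required; interface names `cvl_0_0`/`cvl_0_1` and the addressing are taken from the trace, and the real script wraps `iptables` with an `ipts` helper that tags rules with an `SPDK_NVMF` comment for later cleanup):

```shell
#!/usr/bin/env bash
# Target NIC lives in its own namespace; the initiator stays in the root ns.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target interface

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port 4420 on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability check, both directions.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, the trace prepends `ip netns exec cvl_0_0_ns_spdk` to `NVMF_APP` so the target application itself runs inside the namespace.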
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=848343 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 848343 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 848343 ']' 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.621 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.621 [2024-11-29 13:00:31.711786] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:16:29.621 [2024-11-29 13:00:31.711852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.621 [2024-11-29 13:00:31.811539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.621 [2024-11-29 13:00:31.865237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.621 [2024-11-29 13:00:31.865288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.621 [2024-11-29 13:00:31.865297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.621 [2024-11-29 13:00:31.865304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.621 [2024-11-29 13:00:31.865310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:29.621 [2024-11-29 13:00:31.867631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.621 [2024-11-29 13:00:31.867794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.621 [2024-11-29 13:00:31.867931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.621 [2024-11-29 13:00:31.867931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.883 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.883 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:29.883 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.883 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.883 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18912 00:16:30.146 [2024-11-29 13:00:32.750205] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:30.146 { 00:16:30.146 "nqn": "nqn.2016-06.io.spdk:cnode18912", 00:16:30.146 "tgt_name": "foobar", 00:16:30.146 "method": "nvmf_create_subsystem", 00:16:30.146 "req_id": 1 00:16:30.146 } 00:16:30.146 Got JSON-RPC error 
response 00:16:30.146 response: 00:16:30.146 { 00:16:30.146 "code": -32603, 00:16:30.146 "message": "Unable to find target foobar" 00:16:30.146 }' 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:30.146 { 00:16:30.146 "nqn": "nqn.2016-06.io.spdk:cnode18912", 00:16:30.146 "tgt_name": "foobar", 00:16:30.146 "method": "nvmf_create_subsystem", 00:16:30.146 "req_id": 1 00:16:30.146 } 00:16:30.146 Got JSON-RPC error response 00:16:30.146 response: 00:16:30.146 { 00:16:30.146 "code": -32603, 00:16:30.146 "message": "Unable to find target foobar" 00:16:30.146 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:30.146 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21777 00:16:30.408 [2024-11-29 13:00:32.955105] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21777: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:30.408 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:30.408 { 00:16:30.408 "nqn": "nqn.2016-06.io.spdk:cnode21777", 00:16:30.408 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:30.408 "method": "nvmf_create_subsystem", 00:16:30.408 "req_id": 1 00:16:30.408 } 00:16:30.408 Got JSON-RPC error response 00:16:30.408 response: 00:16:30.408 { 00:16:30.408 "code": -32602, 00:16:30.408 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:30.408 }' 00:16:30.408 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:30.408 { 00:16:30.408 "nqn": "nqn.2016-06.io.spdk:cnode21777", 00:16:30.408 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:30.408 "method": "nvmf_create_subsystem", 
00:16:30.408 "req_id": 1 00:16:30.408 } 00:16:30.408 Got JSON-RPC error response 00:16:30.408 response: 00:16:30.408 { 00:16:30.408 "code": -32602, 00:16:30.408 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:30.408 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:30.408 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:30.408 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31146 00:16:30.669 [2024-11-29 13:00:33.163831] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31146: invalid model number 'SPDK_Controller' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:30.669 { 00:16:30.669 "nqn": "nqn.2016-06.io.spdk:cnode31146", 00:16:30.669 "model_number": "SPDK_Controller\u001f", 00:16:30.669 "method": "nvmf_create_subsystem", 00:16:30.669 "req_id": 1 00:16:30.669 } 00:16:30.669 Got JSON-RPC error response 00:16:30.669 response: 00:16:30.669 { 00:16:30.669 "code": -32602, 00:16:30.669 "message": "Invalid MN SPDK_Controller\u001f" 00:16:30.669 }' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:30.669 { 00:16:30.669 "nqn": "nqn.2016-06.io.spdk:cnode31146", 00:16:30.669 "model_number": "SPDK_Controller\u001f", 00:16:30.669 "method": "nvmf_create_subsystem", 00:16:30.669 "req_id": 1 00:16:30.669 } 00:16:30.669 Got JSON-RPC error response 00:16:30.669 response: 00:16:30.669 { 00:16:30.669 "code": -32602, 00:16:30.669 "message": "Invalid MN SPDK_Controller\u001f" 00:16:30.669 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.669 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:30.669 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:30.670 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:30.670 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:30.670 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.670 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.932 13:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.932 13:00:33 
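The long `printf %x` / `echo -e` run being traced here is `gen_random_s` from target/invalid.sh: it builds a random string one character at a time from the ASCII table declared at `invalid.sh@21`. A condensed sketch of the same idea, simplified here to codes 33–126 so whitespace never needs special handling (the traced table spans 32–127):

```shell
# Build a random string of $1 printable ASCII characters, mirroring the
# chars[] table + string+= loop in the traced gen_random_s helper.
gen_random_s() {
    local length=$1 code string= i
    for (( i = 0; i < length; i++ )); do
        code=$(( RANDOM % 94 + 33 ))                      # '!' .. '~'
        string+=$(printf "\\$(printf '%03o' "$code")")    # code -> character
    done
    printf '%s\n' "$string"
}
```

The test then feeds such strings to `nvmf_create_subsystem` as deliberately invalid serial and model numbers and pattern-matches the JSON-RPC error, just as the earlier `Invalid SN` / `Invalid MN` checks do.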
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:16:30.932 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9;6An&Fn&&o$<Yvo|LiHf' 00:16:33.547 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:35.464 00:16:35.464 real 0m14.189s 00:16:35.464 user 0m21.262s 00:16:35.464 sys 0m6.684s 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.464 ************************************ 00:16:35.464 END TEST nvmf_invalid 00:16:35.464 ************************************ 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.464 13:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.726 ************************************ 00:16:35.726 START TEST nvmf_connect_stress 00:16:35.726 ************************************ 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:35.726 * Looking for test storage... 
00:16:35.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:35.726 13:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.726 13:00:38 
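The `scripts/common.sh` trace above (`lt 1.15 2` dispatching to `cmp_versions 1.15 '<' 2`, used to pick lcov coverage options) splits both version strings into components and compares them position by position, treating a missing component as 0. A reduced sketch of just the less-than path; the traced helper also handles the other operators and splits on `-` and `:` in addition to `.`:

```shell
# True (exit 0) when dotted version $1 sorts strictly before $2,
# comparing components numerically and padding the shorter one with 0s.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}
```

Here `1.15 < 2` holds because the first components already differ (1 < 2), which is why the trace returns success on the very first comparison.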
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:35.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.726 --rc genhtml_branch_coverage=1 00:16:35.726 --rc genhtml_function_coverage=1 00:16:35.726 --rc genhtml_legend=1 00:16:35.726 --rc geninfo_all_blocks=1 00:16:35.726 --rc geninfo_unexecuted_blocks=1 00:16:35.726 00:16:35.726 ' 00:16:35.726 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:35.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.726 --rc genhtml_branch_coverage=1 00:16:35.726 --rc genhtml_function_coverage=1 00:16:35.727 --rc genhtml_legend=1 00:16:35.727 --rc geninfo_all_blocks=1 00:16:35.727 --rc geninfo_unexecuted_blocks=1 00:16:35.727 00:16:35.727 ' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:35.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.727 --rc genhtml_branch_coverage=1 00:16:35.727 --rc genhtml_function_coverage=1 00:16:35.727 --rc genhtml_legend=1 00:16:35.727 --rc geninfo_all_blocks=1 00:16:35.727 --rc geninfo_unexecuted_blocks=1 00:16:35.727 00:16:35.727 ' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:35.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.727 --rc genhtml_branch_coverage=1 00:16:35.727 --rc genhtml_function_coverage=1 00:16:35.727 --rc genhtml_legend=1 00:16:35.727 --rc geninfo_all_blocks=1 00:16:35.727 --rc geninfo_unexecuted_blocks=1 00:16:35.727 00:16:35.727 ' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.727 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
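The trace above records a genuine shell error from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an empty variable reaches a numeric `test` operator. A minimal reproduction with a guarded alternative (the `flag` variable here is illustrative, not the actual variable in common.sh):

```shell
# Reproduce: POSIX test rejects '' as an operand for the numeric -eq operator.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "not enabled (or the test itself errored)"
fi

# Guarded form: default the expansion to 0 so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The unguarded test exits with a non-zero status and an error on stderr, which is exactly the diagnostic the log captured; the `${flag:-0}` form evaluates cleanly as false instead.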
00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:35.989 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.210 13:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:44.210 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.210 13:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:44.210 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.210 13:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:44.210 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:44.210 Found net devices under 0000:4b:00.1: cvl_0_1 
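The discovery loop above buckets NICs by PCI vendor:device pair — Intel `0x8086:0x159b` lands in the `e810` array (the two "Found 0000:4b:00.x" devices), `0x8086:0x37d2` in `x722`, and the Mellanox `0x15b3` IDs in `mlx`. A simplified stand-alone classifier in the same spirit (the function name and the reduced ID table are illustrative, not the harness's own arrays):

```shell
# Map a "vendor:device" PCI ID string to the NIC family the harness sorts it into.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox (abridged here)
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # e810, matching the "Found 0000:4b:00.0" line above
classify_nic 0x15b3:0x1017   # mlx
```

The real script keys an associative cache (`pci_bus_cache`) on these pairs and appends matching bus addresses into the per-family arrays; a `case` lookup captures the same mapping in miniature.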
00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.210 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:44.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:44.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:16:44.211 00:16:44.211 --- 10.0.0.2 ping statistics --- 00:16:44.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.211 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:16:44.211 00:16:44.211 --- 10.0.0.1 ping statistics --- 00:16:44.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.211 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:44.211 13:00:45 
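After wiring the namespace, the harness verifies reachability with a single ping in each direction, and the log captures the statistics lines. A small sketch of extracting the loss percentage from such output (the sample string is copied from the log; the awk parsing is an illustrative add-on, not part of common.sh):

```shell
# Statistics line as captured in the log above.
stats='1 packets transmitted, 1 received, 0% packet loss, time 0ms'

# Field 6 is "<loss>%"; strip the percent sign to get a bare number.
loss=$(printf '%s\n' "$stats" | awk '{ sub(/%/, "", $6); print $6 }')

if [ "$loss" -eq 0 ]; then
    echo "link ok"
else
    echo "packet loss: ${loss}%"
fi
```

The harness itself only needs ping's exit status (a lost packet makes `ping -c 1` fail), but parsing the loss figure is handy when a flaky link should be reported rather than treated as fatal.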
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=853535 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 853535 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 853535 ']' 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.211 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.211 [2024-11-29 13:00:46.004925] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
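The `waitforlisten 853535` call above blocks until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`. A generic poll-with-timeout loop in the same spirit (the function name and the bare path test are illustrative; the real helper in autotest_common.sh is more involved and talks to the RPC socket):

```shell
# Poll for a path to appear, giving up after roughly $2 seconds (default 5).
wait_for_path() {
    path=$1
    tries=$(( ${2:-5} * 10 ))
    while [ ! -e "$path" ]; do
        tries=$((tries - 1))
        [ "$tries" -le 0 ] && return 1
        sleep 0.1
    done
    return 0
}

# Example: succeed once a background task creates the file.
tmp=$(mktemp -u)
( sleep 0.3; : > "$tmp" ) &
wait_for_path "$tmp" 2 && echo "appeared"
rm -f "$tmp"
```

Bounding the wait matters here: if the target crashes on startup, an unbounded loop would hang the whole autotest stage instead of failing it promptly.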
00:16:44.211 [2024-11-29 13:00:46.004999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.211 [2024-11-29 13:00:46.106243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.211 [2024-11-29 13:00:46.158406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.211 [2024-11-29 13:00:46.158458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.211 [2024-11-29 13:00:46.158467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.211 [2024-11-29 13:00:46.158474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.211 [2024-11-29 13:00:46.158481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:44.211 [2024-11-29 13:00:46.160560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.211 [2024-11-29 13:00:46.160721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.211 [2024-11-29 13:00:46.160722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.211 [2024-11-29 13:00:46.878069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:44.211 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.473 [2024-11-29 13:00:46.903746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.473 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.473 NULL1 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=853879 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 853879 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.474 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.734 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.734 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 853879 00:16:44.734 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.734 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
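The repeated `seq 1 20` / `cat` trace above is connect_stress.sh assembling a batch file of 20 RPC requests in `rpc.txt`, to be replayed against the target while `kill -0 $PERF_PID` (which sends no signal, only checks the PID still exists) confirms the stress process is alive. A stand-alone sketch of the same batching pattern (the file handling and the echoed command are placeholders, not the script's actual heredoc content):

```shell
# Build a batch file with one placeholder RPC line per iteration,
# mirroring the "for i in $(seq 1 20); cat" loop traced above.
rpcs=$(mktemp)
: > "$rpcs"
for i in $(seq 1 20); do
    echo "example_rpc_call --iteration $i" >> "$rpcs"
done

count=$(wc -l < "$rpcs")
echo "$count"          # one line per loop pass: 20
rm -f "$rpcs"
```

Batching the requests into a file and submitting them together, rather than issuing 20 individual RPC round trips, keeps the stress window tight while the perf process is hammering the connection.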
common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.734 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.306 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.306 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 853879
00:16:45.306 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.306 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.306 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:54.450 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 853879
00:16:54.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (853879) - No such process
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 853879
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:54.710 rmmod nvme_tcp
00:16:54.710 rmmod nvme_fabrics
00:16:54.710 rmmod nvme_keyring
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 853535 ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 853535 ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 853535'
killing process with pid 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 853535
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:54.710 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
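In the teardown trace above, `nvmfcleanup` wraps `modprobe -v -r` in `set +e` plus a `for i in {1..20}` loop, so a still-busy kernel module is retried rather than aborting the run, and `set -e` is restored afterwards. A generic sketch of that bounded-retry idiom (the `flaky` function is a made-up stand-in for the real module unload, which needs root and real hardware):

```shell
#!/usr/bin/env bash
set -e                       # fail-fast, as the SPDK test scripts run with

attempts=0
flaky() {                    # hypothetical stand-in for `modprobe -v -r nvme-tcp`
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # pretend the unload only succeeds on the 3rd try
}

set +e                       # tolerate failures while retrying
for i in {1..20}; do
    flaky && break           # stop as soon as the command succeeds
    sleep 0.01
done
set -e                       # restore fail-fast once the retry window closes

echo "succeeded after $attempts attempts"
```

Bounding the loop at a fixed iteration count keeps a permanently busy module from hanging the CI job forever; the failure then surfaces in later cleanup steps instead.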
remove_spdk_ns
00:16:54.971 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:54.971 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:54.971 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:56.881
00:16:56.881 real 0m21.290s
00:16:56.881 user 0m42.156s
00:16:56.881 sys 0m9.354s
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:56.881 ************************************
00:16:56.881 END TEST nvmf_connect_stress
00:16:56.881 ************************************
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:56.881 ************************************
00:16:56.881 START TEST nvmf_fused_ordering
00:16:56.881 ************************************
00:16:56.881 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:16:57.143 * Looking for test storage...
00:16:57.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:57.143 13:00:59 
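The trace above shows scripts/common.sh comparing lcov's version against 2: both versions are split on `.` and `-` (via `IFS=.-`) and the fields are compared numerically one by one, with missing fields treated as zero. A simplified sketch of that `lt`/`cmp_versions` idea (the helper name `version_lt` is mine, not the script's):

```shell
#!/usr/bin/env bash
# Field-wise numeric version comparison, in the spirit of the cmp_versions
# trace above: split on '.' and '-' with IFS, compare each field as a decimal.
version_lt() {              # true (status 0) when version $1 sorts before $2
    local IFS='.-'          # split version fields on dots and dashes
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Comparing field by field is what makes `1.9 < 1.15` come out true here, whereas a plain string comparison would get it wrong.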
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:16:57.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:57.143 --rc genhtml_branch_coverage=1
00:16:57.143 --rc genhtml_function_coverage=1
00:16:57.143 --rc genhtml_legend=1
00:16:57.143 --rc geninfo_all_blocks=1
00:16:57.143 --rc geninfo_unexecuted_blocks=1
00:16:57.143
00:16:57.143 '
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:16:57.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:57.143 --rc genhtml_branch_coverage=1
00:16:57.143 --rc genhtml_function_coverage=1
00:16:57.143 --rc genhtml_legend=1
00:16:57.143 --rc geninfo_all_blocks=1
00:16:57.143 --rc geninfo_unexecuted_blocks=1
00:16:57.143
00:16:57.143 '
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:16:57.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:57.143 --rc genhtml_branch_coverage=1
00:16:57.143 --rc genhtml_function_coverage=1
00:16:57.143 --rc genhtml_legend=1
00:16:57.143 --rc geninfo_all_blocks=1
00:16:57.143 --rc geninfo_unexecuted_blocks=1
00:16:57.143
00:16:57.143 '
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:16:57.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:57.143 --rc genhtml_branch_coverage=1
00:16:57.143 --rc genhtml_function_coverage=1
00:16:57.143 --rc genhtml_legend=1
00:16:57.143 --rc geninfo_all_blocks=1
00:16:57.143 --rc geninfo_unexecuted_blocks=1
00:16:57.143
00:16:57.143 '
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.143 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.144 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.290 13:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:05.290 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.290 13:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:05.290 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.290 13:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.290 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:05.291 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:05.291 Found net devices under 0000:4b:00.1: cvl_0_1 
00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.291 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
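The namespace plumbing traced above (flush addresses, create the netns, move the target-side netdev into it, address both sides, open TCP port 4420, then ping in both directions) can be sketched as a standalone script. This is a minimal sketch, not the test harness itself: it assumes root privileges and reuses the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and addresses from this particular run.

```shell
# Sketch of the netns setup performed by nvmf_tcp_init in this trace.
# Interface names, namespace name, and IPs are taken from this run and
# will differ on other hardware; requires root.
TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

nvmf_tcp_netns_setup() {
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Verify connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
}
```

Running the target inside its own namespace is what lets a single box exercise real TCP traffic between "initiator" and "target" NICs without a second machine.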
00:17:05.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:17:05.291 00:17:05.291 --- 10.0.0.2 ping statistics --- 00:17:05.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.291 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:05.291 00:17:05.291 --- 10.0.0.1 ping statistics --- 00:17:05.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.291 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:05.291 13:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=860057 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 860057 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 860057 ']' 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.291 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.291 [2024-11-29 13:01:07.362817] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:17:05.291 [2024-11-29 13:01:07.362885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.291 [2024-11-29 13:01:07.465680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.291 [2024-11-29 13:01:07.516644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.291 [2024-11-29 13:01:07.516700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.291 [2024-11-29 13:01:07.516709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.291 [2024-11-29 13:01:07.516716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.291 [2024-11-29 13:01:07.516723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:05.291 [2024-11-29 13:01:07.517508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.554 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 [2024-11-29 13:01:08.230616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 [2024-11-29 13:01:08.254904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.815 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 NULL1 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.816 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:05.816 [2024-11-29 13:01:08.324622] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:17:05.816 [2024-11-29 13:01:08.324674] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860265 ] 00:17:06.389 Attached to nqn.2016-06.io.spdk:cnode1 00:17:06.389 Namespace ID: 1 size: 1GB 00:17:06.389 fused_ordering(0) 00:17:06.389 fused_ordering(1) 00:17:06.389 fused_ordering(2) 00:17:06.389 fused_ordering(3) 00:17:06.389 fused_ordering(4) 00:17:06.389 fused_ordering(5) 00:17:06.389 fused_ordering(6) 00:17:06.389 fused_ordering(7) 00:17:06.389 fused_ordering(8) 00:17:06.389 fused_ordering(9) 00:17:06.389 fused_ordering(10) 00:17:06.389 fused_ordering(11) 00:17:06.389 fused_ordering(12) 00:17:06.389 fused_ordering(13) 00:17:06.389 fused_ordering(14) 00:17:06.389 fused_ordering(15) 00:17:06.389 fused_ordering(16) 00:17:06.389 fused_ordering(17) 00:17:06.389 fused_ordering(18) 00:17:06.389 fused_ordering(19) 00:17:06.389 fused_ordering(20) 00:17:06.389 fused_ordering(21) 00:17:06.389 fused_ordering(22) 00:17:06.389 fused_ordering(23) 00:17:06.389 fused_ordering(24) 00:17:06.389 fused_ordering(25) 00:17:06.389 fused_ordering(26) 00:17:06.389 fused_ordering(27) 00:17:06.389 
fused_ordering(28) 00:17:06.389 [... repeated fused_ordering(N) progress output for N = 28..1022, timestamps 00:17:06.389 through 00:17:08.369, elided ...] fused_ordering(1023) 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.369 rmmod nvme_tcp 00:17:08.369 rmmod nvme_fabrics 00:17:08.369 rmmod nvme_keyring 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 860057 ']' 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 860057 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 860057 ']' 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 860057 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 860057 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 860057' 00:17:08.369 killing process with pid 860057 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 860057 00:17:08.369 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 860057 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.630 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.173 00:17:11.173 real 0m13.696s 00:17:11.173 user 0m7.403s 00:17:11.173 sys 0m7.294s 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:11.173 ************************************ 00:17:11.173 END TEST nvmf_fused_ordering 00:17:11.173 ************************************ 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:11.173 13:01:13 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.173 ************************************ 00:17:11.173 START TEST nvmf_ns_masking 00:17:11.173 ************************************ 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:11.173 * Looking for test storage... 00:17:11.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.173 13:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.173 --rc genhtml_branch_coverage=1 00:17:11.173 --rc genhtml_function_coverage=1 00:17:11.173 --rc genhtml_legend=1 00:17:11.173 --rc geninfo_all_blocks=1 00:17:11.173 --rc geninfo_unexecuted_blocks=1 00:17:11.173 00:17:11.173 ' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.173 --rc genhtml_branch_coverage=1 00:17:11.173 --rc genhtml_function_coverage=1 00:17:11.173 --rc genhtml_legend=1 00:17:11.173 --rc geninfo_all_blocks=1 00:17:11.173 --rc geninfo_unexecuted_blocks=1 00:17:11.173 00:17:11.173 ' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.173 --rc genhtml_branch_coverage=1 00:17:11.173 --rc genhtml_function_coverage=1 00:17:11.173 --rc genhtml_legend=1 00:17:11.173 --rc geninfo_all_blocks=1 00:17:11.173 --rc geninfo_unexecuted_blocks=1 00:17:11.173 00:17:11.173 ' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:11.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.173 --rc genhtml_branch_coverage=1 00:17:11.173 --rc 
genhtml_function_coverage=1 00:17:11.173 --rc genhtml_legend=1 00:17:11.173 --rc geninfo_all_blocks=1 00:17:11.173 --rc geninfo_unexecuted_blocks=1 00:17:11.173 00:17:11.173 ' 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.173 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8931f9cd-6545-4817-912b-ef7b049b4155 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2fc8e7d5-5296-40b1-aa99-43a9ad154948 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b26856ba-fa3c-45b6-a7d7-4c4c1c986d6b 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.174 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.318 13:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.318 13:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:19.318 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:19.319 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:19.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:19.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:19.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.319 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:17:19.319 00:17:19.319 --- 10.0.0.2 ping statistics --- 00:17:19.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.319 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:17:19.319 00:17:19.319 --- 10.0.0.1 ping statistics --- 00:17:19.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.319 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=864938 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 864938 
00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 864938 ']' 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.319 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:19.319 [2024-11-29 13:01:21.206461] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:17:19.319 [2024-11-29 13:01:21.206530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.319 [2024-11-29 13:01:21.310878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.319 [2024-11-29 13:01:21.362652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.319 [2024-11-29 13:01:21.362707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:19.319 [2024-11-29 13:01:21.362716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.319 [2024-11-29 13:01:21.362723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.319 [2024-11-29 13:01:21.362729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.319 [2024-11-29 13:01:21.363514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.582 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:19.582 [2024-11-29 13:01:22.236979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.843 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:19.843 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:19.843 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:19.843 Malloc1 00:17:19.843 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:20.104 Malloc2 00:17:20.104 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.365 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:20.627 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.627 [2024-11-29 13:01:23.284808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.627 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:20.627 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b26856ba-fa3c-45b6-a7d7-4c4c1c986d6b -a 10.0.0.2 -s 4420 -i 4 00:17:20.888 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.888 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.888 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.888 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:20.888 13:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.435 [ 0]:0x1 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.435 
13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=42d028a309df43c1b9bbbc082a267f84 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 42d028a309df43c1b9bbbc082a267f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.435 [ 0]:0x1 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=42d028a309df43c1b9bbbc082a267f84 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 42d028a309df43c1b9bbbc082a267f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.435 [ 1]:0x2 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.435 13:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:23.435 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.435 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.696 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b26856ba-fa3c-45b6-a7d7-4c4c1c986d6b -a 10.0.0.2 -s 4420 -i 4 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.957 13:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:23.957 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
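The `ns_is_visible` helper traced above pipes `nvme id-ns /dev/nvme0 -n <nsid> -o json` through `jq -r .nguid` and treats an all-zero NGUID as "namespace masked from this host". A minimal sketch of that decision in Python, using the NGUID values that appear in the log output (the function name mirrors the shell helper but is only illustrative, not SPDK code):

```python
import json

def ns_is_visible(id_ns_json: str) -> bool:
    # A namespace masked from the host reports an all-zero NGUID in
    # `nvme id-ns -o json`; a visible one reports its real NGUID.
    nguid = json.loads(id_ns_json).get("nguid", "0" * 32)
    return nguid != "0" * 32

# NGUID values exactly as they appear in the log above.
visible_ns = json.dumps({"nguid": "42d028a309df43c1b9bbbc082a267f84"})
masked_ns = json.dumps({"nguid": "00000000000000000000000000000000"})

print(ns_is_visible(visible_ns))  # True: real NGUID, namespace visible
print(ns_is_visible(masked_ns))   # False: zero NGUID, namespace masked
```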
00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.505 [ 0]:0x2 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:26.505 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.506 [ 0]:0x1 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.506 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=42d028a309df43c1b9bbbc082a267f84 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 42d028a309df43c1b9bbbc082a267f84 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.506 [ 1]:0x2 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.506 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.767 [ 0]:0x2 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.767 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:27.028 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:27.028 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b26856ba-fa3c-45b6-a7d7-4c4c1c986d6b -a 10.0.0.2 -s 4420 -i 4 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:27.289 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:29.200 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.460 [ 0]:0x1 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.460 13:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.460 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=42d028a309df43c1b9bbbc082a267f84 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 42d028a309df43c1b9bbbc082a267f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.461 [ 1]:0x2 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.461 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.461 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:29.461 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.461 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.722 
13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.722 [ 0]:0x2 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.722 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:29.722 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.983 [2024-11-29 13:01:32.490332] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:29.983 request: 00:17:29.983 { 00:17:29.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.983 "nsid": 2, 00:17:29.983 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.983 "method": "nvmf_ns_remove_host", 00:17:29.983 "req_id": 1 00:17:29.983 } 00:17:29.983 Got JSON-RPC error response 00:17:29.983 response: 00:17:29.983 { 00:17:29.983 "code": -32602, 00:17:29.983 "message": "Invalid parameters" 00:17:29.983 } 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:29.983 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.983 [ 0]:0x2 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3e35699e753d4cb192ef61ce8451a4f7 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3e35699e753d4cb192ef61ce8451a4f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:29.983 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=867437 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 867437 /var/tmp/host.sock 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 867437 ']' 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:30.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.244 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:30.244 [2024-11-29 13:01:32.744688] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:17:30.244 [2024-11-29 13:01:32.744737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867437 ] 00:17:30.244 [2024-11-29 13:01:32.831046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.244 [2024-11-29 13:01:32.866630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.186 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.186 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:31.186 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.186 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:31.448 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8931f9cd-6545-4817-912b-ef7b049b4155 00:17:31.448 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:31.448 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8931F9CD65454817912BEF7B049B4155 -i 00:17:31.448 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2fc8e7d5-5296-40b1-aa99-43a9ad154948 00:17:31.448 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:31.448 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2FC8E7D5529640B1AA9943A9AD154948 -i 00:17:31.709 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:31.971 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:31.971 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:31.971 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:32.323 nvme0n1 00:17:32.323 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:32.323 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:32.585 nvme1n2 00:17:32.586 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:32.586 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:32.586 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:32.586 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:32.586 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:32.846 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:32.846 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:32.846 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:32.846 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:33.107 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8931f9cd-6545-4817-912b-ef7b049b4155 == \8\9\3\1\f\9\c\d\-\6\5\4\5\-\4\8\1\7\-\9\1\2\b\-\e\f\7\b\0\4\9\b\4\1\5\5 ]] 00:17:33.107 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:33.107 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:33.107 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:33.368 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2fc8e7d5-5296-40b1-aa99-43a9ad154948 == \2\f\c\8\e\7\d\5\-\5\2\9\6\-\4\0\b\1\-\a\a\9\9\-\4\3\a\9\a\d\1\5\4\9\4\8 ]] 00:17:33.368 13:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.368 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:33.627 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8931f9cd-6545-4817-912b-ef7b049b4155 00:17:33.627 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:33.627 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8931F9CD65454817912BEF7B049B4155 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8931F9CD65454817912BEF7B049B4155 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:33.628 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8931F9CD65454817912BEF7B049B4155 00:17:33.628 [2024-11-29 13:01:36.292572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:33.628 [2024-11-29 13:01:36.292599] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:33.628 [2024-11-29 13:01:36.292606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:33.628 request: 00:17:33.628 { 00:17:33.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.628 "namespace": { 00:17:33.628 "bdev_name": "invalid", 00:17:33.628 "nsid": 1, 00:17:33.628 "nguid": "8931F9CD65454817912BEF7B049B4155", 00:17:33.628 "no_auto_visible": false, 00:17:33.628 "hide_metadata": false 00:17:33.628 }, 00:17:33.628 "method": "nvmf_subsystem_add_ns", 00:17:33.628 "req_id": 1 00:17:33.628 } 00:17:33.628 Got JSON-RPC error response 00:17:33.628 response: 00:17:33.628 { 00:17:33.628 "code": -32602, 00:17:33.628 "message": "Invalid parameters" 00:17:33.628 } 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:33.887 13:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8931f9cd-6545-4817-912b-ef7b049b4155 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8931F9CD65454817912BEF7B049B4155 -i 00:17:33.887 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 867437 ']' 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:36.428 13:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867437' 00:17:36.428 killing process with pid 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 867437 00:17:36.428 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.428 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:36.429 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:17:36.689 rmmod nvme_tcp 00:17:36.689 rmmod nvme_fabrics 00:17:36.689 rmmod nvme_keyring 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 864938 ']' 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 864938 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 864938 ']' 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 864938 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 864938 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 864938' 00:17:36.689 killing process with pid 864938 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 864938 00:17:36.689 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 864938 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.950 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.861 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:38.862 00:17:38.862 real 0m28.142s 00:17:38.862 user 0m31.935s 00:17:38.862 sys 0m8.294s 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.862 ************************************ 00:17:38.862 END TEST nvmf_ns_masking 00:17:38.862 ************************************ 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:38.862 
13:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.862 13:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.124 ************************************ 00:17:39.124 START TEST nvmf_nvme_cli 00:17:39.124 ************************************ 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:39.124 * Looking for test storage... 00:17:39.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.124 
13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.124 --rc genhtml_branch_coverage=1 00:17:39.124 --rc genhtml_function_coverage=1 00:17:39.124 --rc genhtml_legend=1 00:17:39.124 --rc geninfo_all_blocks=1 00:17:39.124 --rc geninfo_unexecuted_blocks=1 00:17:39.124 
00:17:39.124 ' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.124 --rc genhtml_branch_coverage=1 00:17:39.124 --rc genhtml_function_coverage=1 00:17:39.124 --rc genhtml_legend=1 00:17:39.124 --rc geninfo_all_blocks=1 00:17:39.124 --rc geninfo_unexecuted_blocks=1 00:17:39.124 00:17:39.124 ' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.124 --rc genhtml_branch_coverage=1 00:17:39.124 --rc genhtml_function_coverage=1 00:17:39.124 --rc genhtml_legend=1 00:17:39.124 --rc geninfo_all_blocks=1 00:17:39.124 --rc geninfo_unexecuted_blocks=1 00:17:39.124 00:17:39.124 ' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.124 --rc genhtml_branch_coverage=1 00:17:39.124 --rc genhtml_function_coverage=1 00:17:39.124 --rc genhtml_legend=1 00:17:39.124 --rc geninfo_all_blocks=1 00:17:39.124 --rc geninfo_unexecuted_blocks=1 00:17:39.124 00:17:39.124 ' 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.124 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.125 13:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.125 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:47.270 13:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:47.270 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:47.270 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.270 13:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:47.270 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:47.270 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.270 13:01:48 
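The "Found net devices under ..." records above come from a loop in `nvmf/common.sh` that maps each PCI address to its network interfaces via sysfs. The following is a minimal, self-contained sketch of that lookup; it builds a throwaway fake sysfs tree (an assumption made so the sketch runs without real `0000:4b:00.x` hardware) but uses the same glob and `##*/` expansion the traced script shows.

```shell
# Sketch of the PCI-to-netdev discovery loop traced in the log.
# The fake sysfs tree below is a stand-in for /sys/bus/pci/devices.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    # One array entry per interface directory under the PCI node
    pci_net_devs=("$sysfs/$pci/net/"*)
    # Strip the leading path, keeping only the interface name
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "net_devs: ${net_devs[*]}"
rm -rf "$sysfs"
```

The two interfaces collected here are what the log later assigns to the target namespace (`cvl_0_0`) and the initiator side (`cvl_0_1`).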
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.270 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.270 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.270 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:47.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:17:47.271 00:17:47.271 --- 10.0.0.2 ping statistics --- 00:17:47.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.271 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:17:47.271 00:17:47.271 --- 10.0.0.1 ping statistics --- 00:17:47.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.271 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.271 13:01:49 
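The `ipts` wrapper above appends an `SPDK_NVMF:` comment to every rule it inserts; the `iptr` call at the end of the log restores a saved ruleset with those lines filtered out (`iptables-save | grep -v SPDK_NVMF`). A small sketch of that tag-and-filter cleanup pattern, using canned `iptables-save`-style text (an assumption, so it runs without touching a real firewall):

```shell
# Canned stand-in for `iptables-save` output; one rule carries the
# SPDK_NVMF tag that marks it as test-owned.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -p icmp -j ACCEPT'

# Teardown keeps everything except the tagged rules, mirroring
# the grep -v SPDK_NVMF step visible at the end of the log.
cleaned=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$cleaned"
```

In the real script the filtered ruleset would then be fed back through `iptables-restore`, leaving the host firewall as it was before the test.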
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=872843 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 872843 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 872843 ']' 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.271 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.271 [2024-11-29 13:01:49.371505] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:17:47.271 [2024-11-29 13:01:49.371598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.271 [2024-11-29 13:01:49.471966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.271 [2024-11-29 13:01:49.527187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.271 [2024-11-29 13:01:49.527241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.271 [2024-11-29 13:01:49.527250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.271 [2024-11-29 13:01:49.527257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.271 [2024-11-29 13:01:49.527263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.271 [2024-11-29 13:01:49.529235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.271 [2024-11-29 13:01:49.529398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.271 [2024-11-29 13:01:49.529561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.271 [2024-11-29 13:01:49.529561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.533 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.533 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:47.533 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.533 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.533 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 [2024-11-29 13:01:50.248219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 Malloc0 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.794 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 Malloc1 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 [2024-11-29 13:01:50.359213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.795 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:17:48.056 00:17:48.056 Discovery Log Number of Records 2, Generation counter 2 00:17:48.056 =====Discovery Log Entry 0====== 00:17:48.056 trtype: tcp 00:17:48.056 adrfam: ipv4 00:17:48.056 subtype: current discovery subsystem 00:17:48.056 treq: not required 00:17:48.056 portid: 0 00:17:48.056 trsvcid: 4420 
00:17:48.056 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:48.056 traddr: 10.0.0.2 00:17:48.056 eflags: explicit discovery connections, duplicate discovery information 00:17:48.056 sectype: none 00:17:48.056 =====Discovery Log Entry 1====== 00:17:48.056 trtype: tcp 00:17:48.056 adrfam: ipv4 00:17:48.056 subtype: nvme subsystem 00:17:48.056 treq: not required 00:17:48.056 portid: 0 00:17:48.056 trsvcid: 4420 00:17:48.056 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:48.056 traddr: 10.0.0.2 00:17:48.056 eflags: none 00:17:48.056 sectype: none 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:48.056 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.969 13:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:49.969 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.969 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.969 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:49.969 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:49.969 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:51.884 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:51.884 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:51.885 
13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:51.885 /dev/nvme0n2 ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
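The repeated `read -r dev _` / `[[ ... == /dev/nvme* ]]` records above are the trace of `get_nvme_devs`, which filters `nvme list` output down to device nodes so the test can count them (`nvme_num=2`). A sketch of that parser, fed canned `nvme list`-style text (an assumption, so it runs without the nvme CLI or attached controllers):

```shell
# Re-creation of the get_nvme_devs helper traced in the log: keep only
# lines whose first column is a /dev/nvme* device node.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Canned sample mimicking `nvme list` output, including the header and
# separator lines the helper skips.
sample_nvme_list='Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1'

devs=($(printf '%s\n' "$sample_nvme_list" | get_nvme_devs))
echo "nvme_num=${#devs[@]}"
```

Counting the surviving entries is how the test decides whether `nvme connect` produced the expected number of namespaces before it proceeds to `nvme disconnect`.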
return 0 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.885 rmmod nvme_tcp 00:17:51.885 rmmod nvme_fabrics 00:17:51.885 rmmod nvme_keyring 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 872843 ']' 
00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 872843 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 872843 ']' 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 872843 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 872843 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 872843' 00:17:51.885 killing process with pid 872843 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 872843 00:17:51.885 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 872843 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.146 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.147 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:52.147 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.147 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.147 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.058 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:54.058 00:17:54.058 real 0m15.164s 00:17:54.058 user 0m22.566s 00:17:54.058 sys 0m6.435s 00:17:54.058 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.058 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.058 ************************************ 00:17:54.058 END TEST nvmf_nvme_cli 00:17:54.058 ************************************ 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.320 ************************************ 00:17:54.320 START TEST 
nvmf_vfio_user 00:17:54.320 ************************************ 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:54.320 * Looking for test storage... 00:17:54.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.320 13:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:54.320 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:54.321 13:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.321 --rc genhtml_branch_coverage=1 00:17:54.321 --rc genhtml_function_coverage=1 00:17:54.321 --rc genhtml_legend=1 00:17:54.321 --rc geninfo_all_blocks=1 00:17:54.321 --rc geninfo_unexecuted_blocks=1 00:17:54.321 00:17:54.321 ' 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.321 --rc genhtml_branch_coverage=1 00:17:54.321 --rc genhtml_function_coverage=1 00:17:54.321 --rc genhtml_legend=1 00:17:54.321 --rc geninfo_all_blocks=1 00:17:54.321 --rc geninfo_unexecuted_blocks=1 00:17:54.321 00:17:54.321 ' 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.321 --rc genhtml_branch_coverage=1 00:17:54.321 --rc genhtml_function_coverage=1 00:17:54.321 --rc genhtml_legend=1 00:17:54.321 --rc geninfo_all_blocks=1 00:17:54.321 --rc geninfo_unexecuted_blocks=1 00:17:54.321 00:17:54.321 ' 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.321 --rc genhtml_branch_coverage=1 00:17:54.321 --rc genhtml_function_coverage=1 00:17:54.321 --rc genhtml_legend=1 00:17:54.321 --rc geninfo_all_blocks=1 00:17:54.321 --rc geninfo_unexecuted_blocks=1 00:17:54.321 00:17:54.321 ' 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.321 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:54.582 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.582 
13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.582 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:54.583 13:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=874644 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 874644' 00:17:54.583 Process pid: 874644 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 874644 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 874644 ']' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
'[0,1,2,3]' 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.583 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:54.583 [2024-11-29 13:01:57.090795] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:17:54.583 [2024-11-29 13:01:57.090868] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.583 [2024-11-29 13:01:57.177389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.583 [2024-11-29 13:01:57.212167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.583 [2024-11-29 13:01:57.212198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.583 [2024-11-29 13:01:57.212204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.583 [2024-11-29 13:01:57.212209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.583 [2024-11-29 13:01:57.212214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.583 [2024-11-29 13:01:57.213547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.583 [2024-11-29 13:01:57.213702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.583 [2024-11-29 13:01:57.213820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.583 [2024-11-29 13:01:57.213823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.523 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.523 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:55.523 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:56.464 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:56.464 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:56.464 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:56.464 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:56.464 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:56.464 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:56.725 Malloc1 00:17:56.725 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:56.984 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:57.245 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:57.245 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:57.245 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:57.245 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:57.507 Malloc2 00:17:57.507 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:57.768 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:57.768 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:58.030 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:58.030 [2024-11-29 13:02:00.628963] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:17:58.030 [2024-11-29 13:02:00.629003] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875338 ] 00:17:58.030 [2024-11-29 13:02:00.669590] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:58.030 [2024-11-29 13:02:00.674918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:58.030 [2024-11-29 13:02:00.674937] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb4aa12c000 00:17:58.030 [2024-11-29 13:02:00.675910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.676920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.677926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.678931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.679935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.680932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.681939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.682945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:58.030 [2024-11-29 13:02:00.683955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:58.030 [2024-11-29 13:02:00.683963] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb4aa121000 00:17:58.030 [2024-11-29 13:02:00.684880] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:58.030 [2024-11-29 13:02:00.694330] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:58.030 [2024-11-29 13:02:00.694354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:58.030 [2024-11-29 13:02:00.700045] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:58.030 [2024-11-29 13:02:00.700078] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:58.030 [2024-11-29 13:02:00.700140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:58.030 [2024-11-29 13:02:00.700153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:58.030 [2024-11-29 13:02:00.700157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:58.030 [2024-11-29 13:02:00.701046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:58.030 [2024-11-29 13:02:00.701054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:58.030 [2024-11-29 13:02:00.701060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:58.030 [2024-11-29 13:02:00.702059] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:58.030 [2024-11-29 13:02:00.702065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:58.030 [2024-11-29 13:02:00.702070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:58.030 [2024-11-29 13:02:00.703062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:58.030 [2024-11-29 13:02:00.703069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:58.030 [2024-11-29 13:02:00.704065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:58.030 [2024-11-29 13:02:00.704072] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:58.030 [2024-11-29 13:02:00.704075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:58.030 [2024-11-29 13:02:00.704080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:58.030 [2024-11-29 13:02:00.704187] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:58.030 [2024-11-29 13:02:00.704191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:58.030 [2024-11-29 13:02:00.704195] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:58.030 [2024-11-29 13:02:00.705076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:58.030 [2024-11-29 13:02:00.706078] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:58.030 [2024-11-29 13:02:00.707088] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:58.030 [2024-11-29 13:02:00.708089] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:58.030 [2024-11-29 13:02:00.708146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:58.292 [2024-11-29 13:02:00.709096] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:58.292 [2024-11-29 13:02:00.709103] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:58.292 [2024-11-29 13:02:00.709107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:58.292 [2024-11-29 13:02:00.709122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:58.292 [2024-11-29 13:02:00.709130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:58.292 [2024-11-29 13:02:00.709143] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.292 [2024-11-29 13:02:00.709147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.292 [2024-11-29 13:02:00.709149] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.292 [2024-11-29 13:02:00.709161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.292 [2024-11-29 13:02:00.709198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:58.292 [2024-11-29 13:02:00.709205] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:58.292 [2024-11-29 13:02:00.709208] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:58.292 [2024-11-29 13:02:00.709211] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:58.292 [2024-11-29 13:02:00.709215] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:58.292 [2024-11-29 13:02:00.709218] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:58.292 [2024-11-29 13:02:00.709221] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:58.292 [2024-11-29 13:02:00.709225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:58.292 [2024-11-29 13:02:00.709230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:58.292 [2024-11-29 13:02:00.709238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:58.292 [2024-11-29 13:02:00.709250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:58.292 [2024-11-29 13:02:00.709258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.292 [2024-11-29 13:02:00.709264] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.292 [2024-11-29 13:02:00.709270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.292 [2024-11-29 13:02:00.709331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:58.292 [2024-11-29 13:02:00.709334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709362] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:58.293 [2024-11-29 13:02:00.709366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709446] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:58.293 [2024-11-29 13:02:00.709450] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:58.293 [2024-11-29 13:02:00.709452] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709474] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:58.293 [2024-11-29 13:02:00.709481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.293 [2024-11-29 13:02:00.709494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.293 [2024-11-29 13:02:00.709497] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709540] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:58.293 [2024-11-29 13:02:00.709543] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.293 [2024-11-29 13:02:00.709546] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:58.293 [2024-11-29 13:02:00.709567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709593] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:58.293 [2024-11-29 13:02:00.709596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:58.293 [2024-11-29 13:02:00.709600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:58.293 [2024-11-29 13:02:00.709613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709686] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:58.293 [2024-11-29 13:02:00.709689] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:58.293 [2024-11-29 13:02:00.709692] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:58.293 [2024-11-29 13:02:00.709694] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:58.293 [2024-11-29 13:02:00.709697] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:58.293 [2024-11-29 13:02:00.709701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:58.293 [2024-11-29 13:02:00.709707] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:58.293 [2024-11-29 13:02:00.709709] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:58.293 [2024-11-29 13:02:00.709712] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709721] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:58.293 [2024-11-29 13:02:00.709724] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:58.293 [2024-11-29 13:02:00.709727] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709736] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:58.293 [2024-11-29 13:02:00.709739] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:58.293 [2024-11-29 13:02:00.709742] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:58.293 [2024-11-29 13:02:00.709746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:58.293 [2024-11-29 13:02:00.709751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 
13:02:00.709759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:58.293 [2024-11-29 13:02:00.709772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:58.293 ===================================================== 00:17:58.293 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:58.293 ===================================================== 00:17:58.293 Controller Capabilities/Features 00:17:58.293 ================================ 00:17:58.293 Vendor ID: 4e58 00:17:58.293 Subsystem Vendor ID: 4e58 00:17:58.293 Serial Number: SPDK1 00:17:58.293 Model Number: SPDK bdev Controller 00:17:58.293 Firmware Version: 25.01 00:17:58.293 Recommended Arb Burst: 6 00:17:58.293 IEEE OUI Identifier: 8d 6b 50 00:17:58.293 Multi-path I/O 00:17:58.293 May have multiple subsystem ports: Yes 00:17:58.293 May have multiple controllers: Yes 00:17:58.293 Associated with SR-IOV VF: No 00:17:58.293 Max Data Transfer Size: 131072 00:17:58.293 Max Number of Namespaces: 32 00:17:58.293 Max Number of I/O Queues: 127 00:17:58.293 NVMe Specification Version (VS): 1.3 00:17:58.293 NVMe Specification Version (Identify): 1.3 00:17:58.293 Maximum Queue Entries: 256 00:17:58.293 Contiguous Queues Required: Yes 00:17:58.293 Arbitration Mechanisms Supported 00:17:58.293 Weighted Round Robin: Not Supported 00:17:58.293 Vendor Specific: Not Supported 00:17:58.293 Reset Timeout: 15000 ms 00:17:58.293 Doorbell Stride: 4 bytes 00:17:58.293 NVM Subsystem Reset: Not Supported 00:17:58.293 Command Sets Supported 00:17:58.294 NVM Command Set: Supported 00:17:58.294 Boot Partition: Not Supported 00:17:58.294 Memory Page Size Minimum: 4096 bytes 00:17:58.294 
Memory Page Size Maximum: 4096 bytes 00:17:58.294 Persistent Memory Region: Not Supported 00:17:58.294 Optional Asynchronous Events Supported 00:17:58.294 Namespace Attribute Notices: Supported 00:17:58.294 Firmware Activation Notices: Not Supported 00:17:58.294 ANA Change Notices: Not Supported 00:17:58.294 PLE Aggregate Log Change Notices: Not Supported 00:17:58.294 LBA Status Info Alert Notices: Not Supported 00:17:58.294 EGE Aggregate Log Change Notices: Not Supported 00:17:58.294 Normal NVM Subsystem Shutdown event: Not Supported 00:17:58.294 Zone Descriptor Change Notices: Not Supported 00:17:58.294 Discovery Log Change Notices: Not Supported 00:17:58.294 Controller Attributes 00:17:58.294 128-bit Host Identifier: Supported 00:17:58.294 Non-Operational Permissive Mode: Not Supported 00:17:58.294 NVM Sets: Not Supported 00:17:58.294 Read Recovery Levels: Not Supported 00:17:58.294 Endurance Groups: Not Supported 00:17:58.294 Predictable Latency Mode: Not Supported 00:17:58.294 Traffic Based Keep ALive: Not Supported 00:17:58.294 Namespace Granularity: Not Supported 00:17:58.294 SQ Associations: Not Supported 00:17:58.294 UUID List: Not Supported 00:17:58.294 Multi-Domain Subsystem: Not Supported 00:17:58.294 Fixed Capacity Management: Not Supported 00:17:58.294 Variable Capacity Management: Not Supported 00:17:58.294 Delete Endurance Group: Not Supported 00:17:58.294 Delete NVM Set: Not Supported 00:17:58.294 Extended LBA Formats Supported: Not Supported 00:17:58.294 Flexible Data Placement Supported: Not Supported 00:17:58.294 00:17:58.294 Controller Memory Buffer Support 00:17:58.294 ================================ 00:17:58.294 Supported: No 00:17:58.294 00:17:58.294 Persistent Memory Region Support 00:17:58.294 ================================ 00:17:58.294 Supported: No 00:17:58.294 00:17:58.294 Admin Command Set Attributes 00:17:58.294 ============================ 00:17:58.294 Security Send/Receive: Not Supported 00:17:58.294 Format NVM: Not Supported 
00:17:58.294 Firmware Activate/Download: Not Supported 00:17:58.294 Namespace Management: Not Supported 00:17:58.294 Device Self-Test: Not Supported 00:17:58.294 Directives: Not Supported 00:17:58.294 NVMe-MI: Not Supported 00:17:58.294 Virtualization Management: Not Supported 00:17:58.294 Doorbell Buffer Config: Not Supported 00:17:58.294 Get LBA Status Capability: Not Supported 00:17:58.294 Command & Feature Lockdown Capability: Not Supported 00:17:58.294 Abort Command Limit: 4 00:17:58.294 Async Event Request Limit: 4 00:17:58.294 Number of Firmware Slots: N/A 00:17:58.294 Firmware Slot 1 Read-Only: N/A 00:17:58.294 Firmware Activation Without Reset: N/A 00:17:58.294 Multiple Update Detection Support: N/A 00:17:58.294 Firmware Update Granularity: No Information Provided 00:17:58.294 Per-Namespace SMART Log: No 00:17:58.294 Asymmetric Namespace Access Log Page: Not Supported 00:17:58.294 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:58.294 Command Effects Log Page: Supported 00:17:58.294 Get Log Page Extended Data: Supported 00:17:58.294 Telemetry Log Pages: Not Supported 00:17:58.294 Persistent Event Log Pages: Not Supported 00:17:58.294 Supported Log Pages Log Page: May Support 00:17:58.294 Commands Supported & Effects Log Page: Not Supported 00:17:58.294 Feature Identifiers & Effects Log Page:May Support 00:17:58.294 NVMe-MI Commands & Effects Log Page: May Support 00:17:58.294 Data Area 4 for Telemetry Log: Not Supported 00:17:58.294 Error Log Page Entries Supported: 128 00:17:58.294 Keep Alive: Supported 00:17:58.294 Keep Alive Granularity: 10000 ms 00:17:58.294 00:17:58.294 NVM Command Set Attributes 00:17:58.294 ========================== 00:17:58.294 Submission Queue Entry Size 00:17:58.294 Max: 64 00:17:58.294 Min: 64 00:17:58.294 Completion Queue Entry Size 00:17:58.294 Max: 16 00:17:58.294 Min: 16 00:17:58.294 Number of Namespaces: 32 00:17:58.294 Compare Command: Supported 00:17:58.294 Write Uncorrectable Command: Not Supported 00:17:58.294 Dataset 
Management Command: Supported 00:17:58.294 Write Zeroes Command: Supported 00:17:58.294 Set Features Save Field: Not Supported 00:17:58.294 Reservations: Not Supported 00:17:58.294 Timestamp: Not Supported 00:17:58.294 Copy: Supported 00:17:58.294 Volatile Write Cache: Present 00:17:58.294 Atomic Write Unit (Normal): 1 00:17:58.294 Atomic Write Unit (PFail): 1 00:17:58.294 Atomic Compare & Write Unit: 1 00:17:58.294 Fused Compare & Write: Supported 00:17:58.294 Scatter-Gather List 00:17:58.294 SGL Command Set: Supported (Dword aligned) 00:17:58.294 SGL Keyed: Not Supported 00:17:58.294 SGL Bit Bucket Descriptor: Not Supported 00:17:58.294 SGL Metadata Pointer: Not Supported 00:17:58.294 Oversized SGL: Not Supported 00:17:58.294 SGL Metadata Address: Not Supported 00:17:58.294 SGL Offset: Not Supported 00:17:58.294 Transport SGL Data Block: Not Supported 00:17:58.294 Replay Protected Memory Block: Not Supported 00:17:58.294 00:17:58.294 Firmware Slot Information 00:17:58.294 ========================= 00:17:58.294 Active slot: 1 00:17:58.294 Slot 1 Firmware Revision: 25.01 00:17:58.294 00:17:58.294 00:17:58.294 Commands Supported and Effects 00:17:58.294 ============================== 00:17:58.294 Admin Commands 00:17:58.294 -------------- 00:17:58.294 Get Log Page (02h): Supported 00:17:58.294 Identify (06h): Supported 00:17:58.294 Abort (08h): Supported 00:17:58.294 Set Features (09h): Supported 00:17:58.294 Get Features (0Ah): Supported 00:17:58.294 Asynchronous Event Request (0Ch): Supported 00:17:58.294 Keep Alive (18h): Supported 00:17:58.294 I/O Commands 00:17:58.294 ------------ 00:17:58.294 Flush (00h): Supported LBA-Change 00:17:58.294 Write (01h): Supported LBA-Change 00:17:58.294 Read (02h): Supported 00:17:58.294 Compare (05h): Supported 00:17:58.294 Write Zeroes (08h): Supported LBA-Change 00:17:58.294 Dataset Management (09h): Supported LBA-Change 00:17:58.294 Copy (19h): Supported LBA-Change 00:17:58.294 00:17:58.294 Error Log 00:17:58.294 ========= 
00:17:58.294 00:17:58.294 Arbitration 00:17:58.294 =========== 00:17:58.294 Arbitration Burst: 1 00:17:58.294 00:17:58.294 Power Management 00:17:58.294 ================ 00:17:58.294 Number of Power States: 1 00:17:58.294 Current Power State: Power State #0 00:17:58.294 Power State #0: 00:17:58.294 Max Power: 0.00 W 00:17:58.294 Non-Operational State: Operational 00:17:58.294 Entry Latency: Not Reported 00:17:58.294 Exit Latency: Not Reported 00:17:58.294 Relative Read Throughput: 0 00:17:58.294 Relative Read Latency: 0 00:17:58.294 Relative Write Throughput: 0 00:17:58.294 Relative Write Latency: 0 00:17:58.294 Idle Power: Not Reported 00:17:58.294 Active Power: Not Reported 00:17:58.294 Non-Operational Permissive Mode: Not Supported 00:17:58.294 00:17:58.294 Health Information 00:17:58.294 ================== 00:17:58.294 Critical Warnings: 00:17:58.294 Available Spare Space: OK 00:17:58.294 Temperature: OK 00:17:58.294 Device Reliability: OK 00:17:58.294 Read Only: No 00:17:58.294 Volatile Memory Backup: OK 00:17:58.294 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:58.294 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:58.294 Available Spare: 0% 00:17:58.294 Available Sp[2024-11-29 13:02:00.709844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:58.294 [2024-11-29 13:02:00.709850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:58.294 [2024-11-29 13:02:00.709872] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:58.294 [2024-11-29 13:02:00.709878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.294 [2024-11-29 13:02:00.709883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.294 [2024-11-29 13:02:00.709887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.294 [2024-11-29 13:02:00.709892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:58.294 [2024-11-29 13:02:00.710104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:58.294 [2024-11-29 13:02:00.710112] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:58.295 [2024-11-29 13:02:00.711109] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.295 [2024-11-29 13:02:00.711148] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:58.295 [2024-11-29 13:02:00.711152] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:58.295 [2024-11-29 13:02:00.712117] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:58.295 [2024-11-29 13:02:00.712125] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:58.295 [2024-11-29 13:02:00.712179] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:58.295 [2024-11-29 13:02:00.714166] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:58.295 are Threshold: 0% 00:17:58.295 Life Percentage Used: 0% 00:17:58.295 Data Units Read: 0 00:17:58.295 Data 
Units Written: 0 00:17:58.295 Host Read Commands: 0 00:17:58.295 Host Write Commands: 0 00:17:58.295 Controller Busy Time: 0 minutes 00:17:58.295 Power Cycles: 0 00:17:58.295 Power On Hours: 0 hours 00:17:58.295 Unsafe Shutdowns: 0 00:17:58.295 Unrecoverable Media Errors: 0 00:17:58.295 Lifetime Error Log Entries: 0 00:17:58.295 Warning Temperature Time: 0 minutes 00:17:58.295 Critical Temperature Time: 0 minutes 00:17:58.295 00:17:58.295 Number of Queues 00:17:58.295 ================ 00:17:58.295 Number of I/O Submission Queues: 127 00:17:58.295 Number of I/O Completion Queues: 127 00:17:58.295 00:17:58.295 Active Namespaces 00:17:58.295 ================= 00:17:58.295 Namespace ID:1 00:17:58.295 Error Recovery Timeout: Unlimited 00:17:58.295 Command Set Identifier: NVM (00h) 00:17:58.295 Deallocate: Supported 00:17:58.295 Deallocated/Unwritten Error: Not Supported 00:17:58.295 Deallocated Read Value: Unknown 00:17:58.295 Deallocate in Write Zeroes: Not Supported 00:17:58.295 Deallocated Guard Field: 0xFFFF 00:17:58.295 Flush: Supported 00:17:58.295 Reservation: Supported 00:17:58.295 Namespace Sharing Capabilities: Multiple Controllers 00:17:58.295 Size (in LBAs): 131072 (0GiB) 00:17:58.295 Capacity (in LBAs): 131072 (0GiB) 00:17:58.295 Utilization (in LBAs): 131072 (0GiB) 00:17:58.295 NGUID: DB4BE8EE561C4F17B702D9044A5FDE2F 00:17:58.295 UUID: db4be8ee-561c-4f17-b702-d9044a5fde2f 00:17:58.295 Thin Provisioning: Not Supported 00:17:58.295 Per-NS Atomic Units: Yes 00:17:58.295 Atomic Boundary Size (Normal): 0 00:17:58.295 Atomic Boundary Size (PFail): 0 00:17:58.295 Atomic Boundary Offset: 0 00:17:58.295 Maximum Single Source Range Length: 65535 00:17:58.295 Maximum Copy Length: 65535 00:17:58.295 Maximum Source Range Count: 1 00:17:58.295 NGUID/EUI64 Never Reused: No 00:17:58.295 Namespace Write Protected: No 00:17:58.295 Number of LBA Formats: 1 00:17:58.295 Current LBA Format: LBA Format #00 00:17:58.295 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:58.295 00:17:58.295 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:58.295 [2024-11-29 13:02:00.901850] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.610 Initializing NVMe Controllers 00:18:03.610 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:03.610 Initialization complete. Launching workers. 00:18:03.610 ======================================================== 00:18:03.610 Latency(us) 00:18:03.610 Device Information : IOPS MiB/s Average min max 00:18:03.610 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39947.29 156.04 3203.89 866.36 6781.11 00:18:03.610 ======================================================== 00:18:03.610 Total : 39947.29 156.04 3203.89 866.36 6781.11 00:18:03.610 00:18:03.610 [2024-11-29 13:02:05.920455] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.610 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:03.610 [2024-11-29 13:02:06.111302] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.901 Initializing NVMe Controllers 00:18:08.901 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:08.901 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:08.901 Initialization complete. Launching workers. 00:18:08.901 ======================================================== 00:18:08.901 Latency(us) 00:18:08.901 Device Information : IOPS MiB/s Average min max 00:18:08.901 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.72 5985.61 10974.19 00:18:08.901 ======================================================== 00:18:08.901 Total : 16051.20 62.70 7980.72 5985.61 10974.19 00:18:08.901 00:18:08.901 [2024-11-29 13:02:11.148360] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.901 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:08.901 [2024-11-29 13:02:11.361254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.186 [2024-11-29 13:02:16.424298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.186 Initializing NVMe Controllers 00:18:14.186 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.186 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:14.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:14.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:14.186 Initialization complete. Launching workers. 
00:18:14.186 Starting thread on core 2 00:18:14.186 Starting thread on core 3 00:18:14.186 Starting thread on core 1 00:18:14.186 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:14.186 [2024-11-29 13:02:16.673213] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.489 [2024-11-29 13:02:19.727424] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.489 Initializing NVMe Controllers 00:18:17.489 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.489 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.489 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:17.489 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:17.489 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:17.489 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:17.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:17.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:17.489 Initialization complete. Launching workers. 
00:18:17.489 Starting thread on core 1 with urgent priority queue 00:18:17.489 Starting thread on core 2 with urgent priority queue 00:18:17.489 Starting thread on core 3 with urgent priority queue 00:18:17.489 Starting thread on core 0 with urgent priority queue 00:18:17.489 SPDK bdev Controller (SPDK1 ) core 0: 12458.67 IO/s 8.03 secs/100000 ios 00:18:17.489 SPDK bdev Controller (SPDK1 ) core 1: 9189.33 IO/s 10.88 secs/100000 ios 00:18:17.489 SPDK bdev Controller (SPDK1 ) core 2: 14236.00 IO/s 7.02 secs/100000 ios 00:18:17.489 SPDK bdev Controller (SPDK1 ) core 3: 8270.67 IO/s 12.09 secs/100000 ios 00:18:17.489 ======================================================== 00:18:17.489 00:18:17.489 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:17.489 [2024-11-29 13:02:19.965181] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.489 Initializing NVMe Controllers 00:18:17.489 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.489 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:17.489 Namespace ID: 1 size: 0GB 00:18:17.489 Initialization complete. 00:18:17.489 INFO: using host memory buffer for IO 00:18:17.489 Hello world! 
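The arbitration run above reports, per core, both a rate (`IO/s`) and the time to complete the fixed 100000-I/O workload (`secs/100000 ios`); the two figures are reciprocals of each other. A quick consistency check, using the per-core numbers copied from this log:

```python
# Per-core results copied from the arbitration summary in this log:
# (IO/s, secs/100000 ios) for cores 0..3.
results = {
    0: (12458.67, 8.03),
    1: (9189.33, 10.88),
    2: (14236.00, 7.02),
    3: (8270.67, 12.09),
}

# secs/100000 ios is just 100000 / IO/s, up to rounding in the printed output.
for core, (iops, secs_per_100k) in results.items():
    assert abs(100000 / iops - secs_per_100k) < 0.01
```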
00:18:17.489 [2024-11-29 13:02:20.000382] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.489 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:17.751 [2024-11-29 13:02:20.239569] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:18.697 Initializing NVMe Controllers 00:18:18.697 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:18.697 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:18.697 Initialization complete. Launching workers. 00:18:18.697 submit (in ns) avg, min, max = 6573.7, 2829.2, 3998233.3 00:18:18.697 complete (in ns) avg, min, max = 16204.4, 1635.0, 6989881.7 00:18:18.697 00:18:18.697 Submit histogram 00:18:18.697 ================ 00:18:18.697 Range in us Cumulative Count 00:18:18.697 2.827 - 2.840: 0.1449% ( 29) 00:18:18.697 2.840 - 2.853: 0.8746% ( 146) 00:18:18.697 2.853 - 2.867: 3.4185% ( 509) 00:18:18.697 2.867 - 2.880: 7.8515% ( 887) 00:18:18.697 2.880 - 2.893: 13.3240% ( 1095) 00:18:18.697 2.893 - 2.907: 19.0664% ( 1149) 00:18:18.697 2.907 - 2.920: 25.4186% ( 1271) 00:18:18.697 2.920 - 2.933: 31.5008% ( 1217) 00:18:18.697 2.933 - 2.947: 36.8834% ( 1077) 00:18:18.697 2.947 - 2.960: 41.8612% ( 996) 00:18:18.697 2.960 - 2.973: 46.6890% ( 966) 00:18:18.697 2.973 - 2.987: 53.2111% ( 1305) 00:18:18.697 2.987 - 3.000: 61.9771% ( 1754) 00:18:18.697 3.000 - 3.013: 72.2175% ( 2049) 00:18:18.697 3.013 - 3.027: 81.1885% ( 1795) 00:18:18.697 3.027 - 3.040: 87.5106% ( 1265) 00:18:18.697 3.040 - 3.053: 92.5983% ( 1018) 00:18:18.697 3.053 - 3.067: 95.9918% ( 679) 00:18:18.697 3.067 - 3.080: 97.4611% ( 294) 00:18:18.697 3.080 - 3.093: 98.5507% ( 218) 00:18:18.697 3.093 - 3.107: 
99.0304% ( 96) 00:18:18.697 3.107 - 3.120: 99.2803% ( 50) 00:18:18.697 3.120 - 3.133: 99.4203% ( 28) 00:18:18.697 3.133 - 3.147: 99.4452% ( 5) 00:18:18.697 3.147 - 3.160: 99.4752% ( 6) 00:18:18.697 3.160 - 3.173: 99.4902% ( 3) 00:18:18.697 3.187 - 3.200: 99.4952% ( 1) 00:18:18.697 3.413 - 3.440: 99.5102% ( 3) 00:18:18.697 3.493 - 3.520: 99.5152% ( 1) 00:18:18.697 3.547 - 3.573: 99.5202% ( 1) 00:18:18.697 3.627 - 3.653: 99.5252% ( 1) 00:18:18.697 3.760 - 3.787: 99.5302% ( 1) 00:18:18.697 3.920 - 3.947: 99.5352% ( 1) 00:18:18.697 3.947 - 3.973: 99.5402% ( 1) 00:18:18.697 4.000 - 4.027: 99.5452% ( 1) 00:18:18.697 4.027 - 4.053: 99.5502% ( 1) 00:18:18.697 4.240 - 4.267: 99.5552% ( 1) 00:18:18.697 4.293 - 4.320: 99.5602% ( 1) 00:18:18.697 4.373 - 4.400: 99.5702% ( 2) 00:18:18.697 4.400 - 4.427: 99.5752% ( 1) 00:18:18.697 4.480 - 4.507: 99.5802% ( 1) 00:18:18.697 4.507 - 4.533: 99.5852% ( 1) 00:18:18.697 4.640 - 4.667: 99.5902% ( 1) 00:18:18.697 4.693 - 4.720: 99.5952% ( 1) 00:18:18.697 4.907 - 4.933: 99.6102% ( 3) 00:18:18.697 4.987 - 5.013: 99.6202% ( 2) 00:18:18.697 5.013 - 5.040: 99.6252% ( 1) 00:18:18.697 5.040 - 5.067: 99.6302% ( 1) 00:18:18.697 5.067 - 5.093: 99.6452% ( 3) 00:18:18.697 5.093 - 5.120: 99.6502% ( 1) 00:18:18.697 5.120 - 5.147: 99.6552% ( 1) 00:18:18.697 5.147 - 5.173: 99.6602% ( 1) 00:18:18.697 5.200 - 5.227: 99.6701% ( 2) 00:18:18.697 5.253 - 5.280: 99.6751% ( 1) 00:18:18.697 5.307 - 5.333: 99.6801% ( 1) 00:18:18.697 5.360 - 5.387: 99.6851% ( 1) 00:18:18.697 5.413 - 5.440: 99.6901% ( 1) 00:18:18.697 5.467 - 5.493: 99.7051% ( 3) 00:18:18.697 5.493 - 5.520: 99.7101% ( 1) 00:18:18.697 5.520 - 5.547: 99.7201% ( 2) 00:18:18.697 5.547 - 5.573: 99.7301% ( 2) 00:18:18.697 5.573 - 5.600: 99.7351% ( 1) 00:18:18.697 5.600 - 5.627: 99.7401% ( 1) 00:18:18.697 5.627 - 5.653: 99.7451% ( 1) 00:18:18.697 5.653 - 5.680: 99.7551% ( 2) 00:18:18.697 5.707 - 5.733: 99.7651% ( 2) 00:18:18.697 5.733 - 5.760: 99.7701% ( 1) 00:18:18.697 5.787 - 5.813: 99.7751% ( 1) 
00:18:18.697 5.840 - 5.867: 99.7851% ( 2) 00:18:18.697 5.867 - 5.893: 99.7901% ( 1) 00:18:18.697 5.893 - 5.920: 99.7951% ( 1) 00:18:18.697 5.920 - 5.947: 99.8051% ( 2) 00:18:18.697 6.053 - 6.080: 99.8201% ( 3) 00:18:18.697 6.213 - 6.240: 99.8301% ( 2) 00:18:18.697 6.240 - 6.267: 99.8351% ( 1) 00:18:18.697 6.267 - 6.293: 99.8401% ( 1) 00:18:18.697 6.320 - 6.347: 99.8501% ( 2) 00:18:18.697 6.373 - 6.400: 99.8551% ( 1) 00:18:18.697 6.427 - 6.453: 99.8601% ( 1) 00:18:18.697 [2024-11-29 13:02:21.255131] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:18.697 6.453 - 6.480: 99.8651% ( 1) 00:18:18.697 6.480 - 6.507: 99.8701% ( 1) 00:18:18.697 6.560 - 6.587: 99.8751% ( 1) 00:18:18.697 6.613 - 6.640: 99.8801% ( 1) 00:18:18.697 6.720 - 6.747: 99.8851% ( 1) 00:18:18.697 6.747 - 6.773: 99.8900% ( 1) 00:18:18.697 7.147 - 7.200: 99.8950% ( 1) 00:18:18.697 8.267 - 8.320: 99.9000% ( 1) 00:18:18.697 11.413 - 11.467: 99.9050% ( 1) 00:18:18.697 12.213 - 12.267: 99.9100% ( 1) 00:18:18.697 3986.773 - 4014.080: 100.0000% ( 18) 00:18:18.697 00:18:18.697 Complete histogram 00:18:18.697 ================== 00:18:18.697 Range in us Cumulative Count 00:18:18.697 1.633 - 1.640: 0.0050% ( 1) 00:18:18.697 1.640 - 1.647: 0.6997% ( 139) 00:18:18.697 1.647 - 1.653: 1.0595% ( 72) 00:18:18.697 1.653 - 1.660: 1.1395% ( 16) 00:18:18.697 1.660 - 1.667: 1.2045% ( 13) 00:18:18.697 1.667 - 1.673: 1.2744% ( 14) 00:18:18.697 1.673 - 1.680: 1.3194% ( 9) 00:18:18.697 1.680 - 1.687: 1.3394% ( 4) 00:18:18.697 1.687 - 1.693: 1.3444% ( 1) 00:18:18.697 1.693 - 1.700: 20.4758% ( 3828) 00:18:18.697 1.700 - 1.707: 51.5968% ( 6227) 00:18:18.697 1.707 - 1.720: 71.5578% ( 3994) 00:18:18.697 1.720 - 1.733: 81.5483% ( 1999) 00:18:18.697 1.733 - 1.747: 84.0672% ( 504) 00:18:18.697 1.747 - 1.760: 85.5715% ( 301) 00:18:18.697 1.760 - 1.773: 91.0840% ( 1103) 00:18:18.697 1.773 - 1.787: 96.0418% ( 992) 00:18:18.697 1.787 - 1.800: 98.4007% ( 472) 00:18:18.697 1.800 - 
1.813: 99.2653% ( 173) 00:18:18.697 1.813 - 1.827: 99.4752% ( 42) 00:18:18.697 1.827 - 1.840: 99.4902% ( 3) 00:18:18.697 3.373 - 3.387: 99.4952% ( 1) 00:18:18.697 3.680 - 3.707: 99.5002% ( 1) 00:18:18.697 3.840 - 3.867: 99.5052% ( 1) 00:18:18.697 4.160 - 4.187: 99.5202% ( 3) 00:18:18.697 4.347 - 4.373: 99.5302% ( 2) 00:18:18.697 4.427 - 4.453: 99.5352% ( 1) 00:18:18.697 4.480 - 4.507: 99.5402% ( 1) 00:18:18.697 4.640 - 4.667: 99.5452% ( 1) 00:18:18.697 4.667 - 4.693: 99.5502% ( 1) 00:18:18.697 4.693 - 4.720: 99.5552% ( 1) 00:18:18.697 4.773 - 4.800: 99.5602% ( 1) 00:18:18.697 4.853 - 4.880: 99.5652% ( 1) 00:18:18.697 4.987 - 5.013: 99.5702% ( 1) 00:18:18.697 5.013 - 5.040: 99.5752% ( 1) 00:18:18.697 5.040 - 5.067: 99.5802% ( 1) 00:18:18.697 5.147 - 5.173: 99.5852% ( 1) 00:18:18.697 5.307 - 5.333: 99.6002% ( 3) 00:18:18.697 5.573 - 5.600: 99.6052% ( 1) 00:18:18.698 5.627 - 5.653: 99.6102% ( 1) 00:18:18.698 5.893 - 5.920: 99.6152% ( 1) 00:18:18.698 6.080 - 6.107: 99.6202% ( 1) 00:18:18.698 8.320 - 8.373: 99.6252% ( 1) 00:18:18.698 8.693 - 8.747: 99.6302% ( 1) 00:18:18.698 12.853 - 12.907: 99.6352% ( 1) 00:18:18.698 14.080 - 14.187: 99.6402% ( 1) 00:18:18.698 3153.920 - 3167.573: 99.6452% ( 1) 00:18:18.698 3986.773 - 4014.080: 99.9950% ( 70) 00:18:18.698 6963.200 - 6990.507: 100.0000% ( 1) 00:18:18.698 00:18:18.698 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:18.698 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:18.698 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:18.698 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:18.698 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:18.958 [ 00:18:18.958 { 00:18:18.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:18.958 "subtype": "Discovery", 00:18:18.958 "listen_addresses": [], 00:18:18.958 "allow_any_host": true, 00:18:18.958 "hosts": [] 00:18:18.958 }, 00:18:18.958 { 00:18:18.958 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:18.958 "subtype": "NVMe", 00:18:18.958 "listen_addresses": [ 00:18:18.958 { 00:18:18.958 "trtype": "VFIOUSER", 00:18:18.958 "adrfam": "IPv4", 00:18:18.958 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:18.958 "trsvcid": "0" 00:18:18.958 } 00:18:18.958 ], 00:18:18.958 "allow_any_host": true, 00:18:18.958 "hosts": [], 00:18:18.958 "serial_number": "SPDK1", 00:18:18.958 "model_number": "SPDK bdev Controller", 00:18:18.958 "max_namespaces": 32, 00:18:18.958 "min_cntlid": 1, 00:18:18.958 "max_cntlid": 65519, 00:18:18.958 "namespaces": [ 00:18:18.958 { 00:18:18.958 "nsid": 1, 00:18:18.958 "bdev_name": "Malloc1", 00:18:18.958 "name": "Malloc1", 00:18:18.959 "nguid": "DB4BE8EE561C4F17B702D9044A5FDE2F", 00:18:18.959 "uuid": "db4be8ee-561c-4f17-b702-d9044a5fde2f" 00:18:18.959 } 00:18:18.959 ] 00:18:18.959 }, 00:18:18.959 { 00:18:18.959 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:18.959 "subtype": "NVMe", 00:18:18.959 "listen_addresses": [ 00:18:18.959 { 00:18:18.959 "trtype": "VFIOUSER", 00:18:18.959 "adrfam": "IPv4", 00:18:18.959 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:18.959 "trsvcid": "0" 00:18:18.959 } 00:18:18.959 ], 00:18:18.959 "allow_any_host": true, 00:18:18.959 "hosts": [], 00:18:18.959 "serial_number": "SPDK2", 00:18:18.959 "model_number": "SPDK bdev Controller", 00:18:18.959 "max_namespaces": 32, 00:18:18.959 "min_cntlid": 1, 00:18:18.959 "max_cntlid": 65519, 00:18:18.959 "namespaces": [ 00:18:18.959 { 00:18:18.959 "nsid": 1, 00:18:18.959 "bdev_name": "Malloc2", 00:18:18.959 "name": "Malloc2", 00:18:18.959 
"nguid": "9E277C04CA064B40BB97E1E0E7AE0794", 00:18:18.959 "uuid": "9e277c04-ca06-4b40-bb97-e1e0e7ae0794" 00:18:18.959 } 00:18:18.959 ] 00:18:18.959 } 00:18:18.959 ] 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=879369 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:18.959 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:18.959 [2024-11-29 13:02:21.635513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:19.220 Malloc3 00:18:19.220 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:19.220 [2024-11-29 13:02:21.839829] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:19.220 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.220 Asynchronous Event Request test 00:18:19.220 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.220 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:19.220 Registering asynchronous event callbacks... 00:18:19.220 Starting namespace attribute notice tests for all controllers... 00:18:19.220 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:19.220 aer_cb - Changed Namespace 00:18:19.220 Cleaning up... 
00:18:19.482 [ 00:18:19.482 { 00:18:19.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.482 "subtype": "Discovery", 00:18:19.482 "listen_addresses": [], 00:18:19.482 "allow_any_host": true, 00:18:19.482 "hosts": [] 00:18:19.482 }, 00:18:19.482 { 00:18:19.482 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.482 "subtype": "NVMe", 00:18:19.482 "listen_addresses": [ 00:18:19.482 { 00:18:19.482 "trtype": "VFIOUSER", 00:18:19.482 "adrfam": "IPv4", 00:18:19.482 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.482 "trsvcid": "0" 00:18:19.482 } 00:18:19.482 ], 00:18:19.482 "allow_any_host": true, 00:18:19.482 "hosts": [], 00:18:19.482 "serial_number": "SPDK1", 00:18:19.482 "model_number": "SPDK bdev Controller", 00:18:19.482 "max_namespaces": 32, 00:18:19.482 "min_cntlid": 1, 00:18:19.482 "max_cntlid": 65519, 00:18:19.482 "namespaces": [ 00:18:19.482 { 00:18:19.482 "nsid": 1, 00:18:19.482 "bdev_name": "Malloc1", 00:18:19.482 "name": "Malloc1", 00:18:19.482 "nguid": "DB4BE8EE561C4F17B702D9044A5FDE2F", 00:18:19.482 "uuid": "db4be8ee-561c-4f17-b702-d9044a5fde2f" 00:18:19.482 }, 00:18:19.482 { 00:18:19.482 "nsid": 2, 00:18:19.482 "bdev_name": "Malloc3", 00:18:19.482 "name": "Malloc3", 00:18:19.482 "nguid": "55BCE824D6AF4790A4664D75C932F715", 00:18:19.482 "uuid": "55bce824-d6af-4790-a466-4d75c932f715" 00:18:19.482 } 00:18:19.482 ] 00:18:19.482 }, 00:18:19.482 { 00:18:19.482 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.482 "subtype": "NVMe", 00:18:19.482 "listen_addresses": [ 00:18:19.482 { 00:18:19.482 "trtype": "VFIOUSER", 00:18:19.482 "adrfam": "IPv4", 00:18:19.482 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.482 "trsvcid": "0" 00:18:19.482 } 00:18:19.482 ], 00:18:19.482 "allow_any_host": true, 00:18:19.482 "hosts": [], 00:18:19.482 "serial_number": "SPDK2", 00:18:19.482 "model_number": "SPDK bdev Controller", 00:18:19.482 "max_namespaces": 32, 00:18:19.482 "min_cntlid": 1, 00:18:19.482 "max_cntlid": 65519, 00:18:19.482 "namespaces": [ 
00:18:19.482 { 00:18:19.482 "nsid": 1, 00:18:19.482 "bdev_name": "Malloc2", 00:18:19.482 "name": "Malloc2", 00:18:19.482 "nguid": "9E277C04CA064B40BB97E1E0E7AE0794", 00:18:19.482 "uuid": "9e277c04-ca06-4b40-bb97-e1e0e7ae0794" 00:18:19.482 } 00:18:19.482 ] 00:18:19.482 } 00:18:19.482 ] 00:18:19.482 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 879369 00:18:19.482 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:19.482 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:19.482 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:19.482 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:19.482 [2024-11-29 13:02:22.070856] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:18:19.482 [2024-11-29 13:02:22.070899] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879381 ] 00:18:19.482 [2024-11-29 13:02:22.110361] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:19.482 [2024-11-29 13:02:22.119352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.482 [2024-11-29 13:02:22.119372] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd7f082000 00:18:19.482 [2024-11-29 13:02:22.120356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.121360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.122364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.123377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.124387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.125394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.126401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.482 
[2024-11-29 13:02:22.127412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.482 [2024-11-29 13:02:22.128422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.482 [2024-11-29 13:02:22.128430] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd7f077000 00:18:19.482 [2024-11-29 13:02:22.129342] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:19.482 [2024-11-29 13:02:22.142722] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:19.482 [2024-11-29 13:02:22.142741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:19.482 [2024-11-29 13:02:22.144792] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:19.482 [2024-11-29 13:02:22.144823] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:19.482 [2024-11-29 13:02:22.144883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:19.482 [2024-11-29 13:02:22.144894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:19.482 [2024-11-29 13:02:22.144899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:19.482 [2024-11-29 13:02:22.145798] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:19.482 [2024-11-29 13:02:22.145808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:19.482 [2024-11-29 13:02:22.145814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:19.482 [2024-11-29 13:02:22.146801] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:19.482 [2024-11-29 13:02:22.146808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:19.482 [2024-11-29 13:02:22.146813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:19.482 [2024-11-29 13:02:22.147805] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:19.482 [2024-11-29 13:02:22.147811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:19.482 [2024-11-29 13:02:22.148814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:19.482 [2024-11-29 13:02:22.148821] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:19.482 [2024-11-29 13:02:22.148824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:19.482 [2024-11-29 13:02:22.148829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:19.482 [2024-11-29 13:02:22.148936] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:19.482 [2024-11-29 13:02:22.148939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:19.482 [2024-11-29 13:02:22.148943] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:19.482 [2024-11-29 13:02:22.149821] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:19.482 [2024-11-29 13:02:22.150829] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:19.482 [2024-11-29 13:02:22.151835] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:19.482 [2024-11-29 13:02:22.152838] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.483 [2024-11-29 13:02:22.152868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:19.483 [2024-11-29 13:02:22.153843] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:19.483 [2024-11-29 13:02:22.153849] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:19.483 [2024-11-29 13:02:22.153853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:19.483 [2024-11-29 13:02:22.153869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:19.483 [2024-11-29 13:02:22.153875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:19.483 [2024-11-29 13:02:22.153886] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.483 [2024-11-29 13:02:22.153890] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.483 [2024-11-29 13:02:22.153893] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.483 [2024-11-29 13:02:22.153901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.483 [2024-11-29 13:02:22.160166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:19.483 [2024-11-29 13:02:22.160176] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:19.483 [2024-11-29 13:02:22.160180] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:19.483 [2024-11-29 13:02:22.160183] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:19.483 [2024-11-29 13:02:22.160186] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:19.483 [2024-11-29 13:02:22.160189] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:19.483 [2024-11-29 13:02:22.160193] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:19.483 [2024-11-29 13:02:22.160196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:19.483 [2024-11-29 13:02:22.160201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:19.483 [2024-11-29 13:02:22.160209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:19.745 [2024-11-29 13:02:22.168164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:19.745 [2024-11-29 13:02:22.168175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.745 [2024-11-29 13:02:22.168181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.745 [2024-11-29 13:02:22.168187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.745 [2024-11-29 13:02:22.168193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.745 [2024-11-29 13:02:22.168197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:19.745 [2024-11-29 13:02:22.168203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:19.745 [2024-11-29 13:02:22.168210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:19.745 [2024-11-29 13:02:22.176164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:19.745 [2024-11-29 13:02:22.176171] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:19.745 [2024-11-29 13:02:22.176174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:19.745 [2024-11-29 13:02:22.176183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.176187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.176193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.184164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.184214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.184219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:19.746 
[2024-11-29 13:02:22.184225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:19.746 [2024-11-29 13:02:22.184228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:19.746 [2024-11-29 13:02:22.184230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.184235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.192164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.192176] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:19.746 [2024-11-29 13:02:22.192185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.192191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.192196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.746 [2024-11-29 13:02:22.192199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.746 [2024-11-29 13:02:22.192201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.192206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.200163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.200173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.200179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.200184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:19.746 [2024-11-29 13:02:22.200187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.746 [2024-11-29 13:02:22.200189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.200194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.208164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.208177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208202] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:19.746 [2024-11-29 13:02:22.208205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:19.746 [2024-11-29 13:02:22.208209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:19.746 [2024-11-29 13:02:22.208221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.216163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.216174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.224163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.224172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.232162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 
13:02:22.232172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.240162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.240174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:19.746 [2024-11-29 13:02:22.240177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:19.746 [2024-11-29 13:02:22.240180] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:19.746 [2024-11-29 13:02:22.240182] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:19.746 [2024-11-29 13:02:22.240184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:19.746 [2024-11-29 13:02:22.240189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:19.746 [2024-11-29 13:02:22.240194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:19.746 [2024-11-29 13:02:22.240197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:19.746 [2024-11-29 13:02:22.240200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.240205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.240210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:19.746 [2024-11-29 13:02:22.240213] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:19.746 [2024-11-29 13:02:22.240216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.240220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.240226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:19.746 [2024-11-29 13:02:22.240229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:19.746 [2024-11-29 13:02:22.240231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:19.746 [2024-11-29 13:02:22.240235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:19.746 [2024-11-29 13:02:22.248165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:19.746 [2024-11-29 13:02:22.248176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:19.747 [2024-11-29 13:02:22.248184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:19.747 [2024-11-29 13:02:22.248189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:19.747 ===================================================== 00:18:19.747 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:19.747 ===================================================== 00:18:19.747 Controller Capabilities/Features 00:18:19.747 
================================ 00:18:19.747 Vendor ID: 4e58 00:18:19.747 Subsystem Vendor ID: 4e58 00:18:19.747 Serial Number: SPDK2 00:18:19.747 Model Number: SPDK bdev Controller 00:18:19.747 Firmware Version: 25.01 00:18:19.747 Recommended Arb Burst: 6 00:18:19.747 IEEE OUI Identifier: 8d 6b 50 00:18:19.747 Multi-path I/O 00:18:19.747 May have multiple subsystem ports: Yes 00:18:19.747 May have multiple controllers: Yes 00:18:19.747 Associated with SR-IOV VF: No 00:18:19.747 Max Data Transfer Size: 131072 00:18:19.747 Max Number of Namespaces: 32 00:18:19.747 Max Number of I/O Queues: 127 00:18:19.747 NVMe Specification Version (VS): 1.3 00:18:19.747 NVMe Specification Version (Identify): 1.3 00:18:19.747 Maximum Queue Entries: 256 00:18:19.747 Contiguous Queues Required: Yes 00:18:19.747 Arbitration Mechanisms Supported 00:18:19.747 Weighted Round Robin: Not Supported 00:18:19.747 Vendor Specific: Not Supported 00:18:19.747 Reset Timeout: 15000 ms 00:18:19.747 Doorbell Stride: 4 bytes 00:18:19.747 NVM Subsystem Reset: Not Supported 00:18:19.747 Command Sets Supported 00:18:19.747 NVM Command Set: Supported 00:18:19.747 Boot Partition: Not Supported 00:18:19.747 Memory Page Size Minimum: 4096 bytes 00:18:19.747 Memory Page Size Maximum: 4096 bytes 00:18:19.747 Persistent Memory Region: Not Supported 00:18:19.747 Optional Asynchronous Events Supported 00:18:19.747 Namespace Attribute Notices: Supported 00:18:19.747 Firmware Activation Notices: Not Supported 00:18:19.747 ANA Change Notices: Not Supported 00:18:19.747 PLE Aggregate Log Change Notices: Not Supported 00:18:19.747 LBA Status Info Alert Notices: Not Supported 00:18:19.747 EGE Aggregate Log Change Notices: Not Supported 00:18:19.747 Normal NVM Subsystem Shutdown event: Not Supported 00:18:19.747 Zone Descriptor Change Notices: Not Supported 00:18:19.747 Discovery Log Change Notices: Not Supported 00:18:19.747 Controller Attributes 00:18:19.747 128-bit Host Identifier: Supported 00:18:19.747 
Non-Operational Permissive Mode: Not Supported 00:18:19.747 NVM Sets: Not Supported 00:18:19.747 Read Recovery Levels: Not Supported 00:18:19.747 Endurance Groups: Not Supported 00:18:19.747 Predictable Latency Mode: Not Supported 00:18:19.747 Traffic Based Keep ALive: Not Supported 00:18:19.747 Namespace Granularity: Not Supported 00:18:19.747 SQ Associations: Not Supported 00:18:19.747 UUID List: Not Supported 00:18:19.747 Multi-Domain Subsystem: Not Supported 00:18:19.747 Fixed Capacity Management: Not Supported 00:18:19.747 Variable Capacity Management: Not Supported 00:18:19.747 Delete Endurance Group: Not Supported 00:18:19.747 Delete NVM Set: Not Supported 00:18:19.747 Extended LBA Formats Supported: Not Supported 00:18:19.747 Flexible Data Placement Supported: Not Supported 00:18:19.747 00:18:19.747 Controller Memory Buffer Support 00:18:19.747 ================================ 00:18:19.747 Supported: No 00:18:19.747 00:18:19.747 Persistent Memory Region Support 00:18:19.747 ================================ 00:18:19.747 Supported: No 00:18:19.747 00:18:19.747 Admin Command Set Attributes 00:18:19.747 ============================ 00:18:19.747 Security Send/Receive: Not Supported 00:18:19.747 Format NVM: Not Supported 00:18:19.747 Firmware Activate/Download: Not Supported 00:18:19.747 Namespace Management: Not Supported 00:18:19.747 Device Self-Test: Not Supported 00:18:19.747 Directives: Not Supported 00:18:19.747 NVMe-MI: Not Supported 00:18:19.747 Virtualization Management: Not Supported 00:18:19.747 Doorbell Buffer Config: Not Supported 00:18:19.747 Get LBA Status Capability: Not Supported 00:18:19.747 Command & Feature Lockdown Capability: Not Supported 00:18:19.747 Abort Command Limit: 4 00:18:19.747 Async Event Request Limit: 4 00:18:19.747 Number of Firmware Slots: N/A 00:18:19.747 Firmware Slot 1 Read-Only: N/A 00:18:19.747 Firmware Activation Without Reset: N/A 00:18:19.747 Multiple Update Detection Support: N/A 00:18:19.747 Firmware Update 
Granularity: No Information Provided 00:18:19.747 Per-Namespace SMART Log: No 00:18:19.747 Asymmetric Namespace Access Log Page: Not Supported 00:18:19.747 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:19.747 Command Effects Log Page: Supported 00:18:19.747 Get Log Page Extended Data: Supported 00:18:19.747 Telemetry Log Pages: Not Supported 00:18:19.747 Persistent Event Log Pages: Not Supported 00:18:19.747 Supported Log Pages Log Page: May Support 00:18:19.747 Commands Supported & Effects Log Page: Not Supported 00:18:19.747 Feature Identifiers & Effects Log Page:May Support 00:18:19.747 NVMe-MI Commands & Effects Log Page: May Support 00:18:19.747 Data Area 4 for Telemetry Log: Not Supported 00:18:19.747 Error Log Page Entries Supported: 128 00:18:19.747 Keep Alive: Supported 00:18:19.747 Keep Alive Granularity: 10000 ms 00:18:19.747 00:18:19.747 NVM Command Set Attributes 00:18:19.747 ========================== 00:18:19.747 Submission Queue Entry Size 00:18:19.747 Max: 64 00:18:19.747 Min: 64 00:18:19.747 Completion Queue Entry Size 00:18:19.747 Max: 16 00:18:19.747 Min: 16 00:18:19.747 Number of Namespaces: 32 00:18:19.747 Compare Command: Supported 00:18:19.747 Write Uncorrectable Command: Not Supported 00:18:19.747 Dataset Management Command: Supported 00:18:19.747 Write Zeroes Command: Supported 00:18:19.747 Set Features Save Field: Not Supported 00:18:19.747 Reservations: Not Supported 00:18:19.747 Timestamp: Not Supported 00:18:19.747 Copy: Supported 00:18:19.747 Volatile Write Cache: Present 00:18:19.747 Atomic Write Unit (Normal): 1 00:18:19.747 Atomic Write Unit (PFail): 1 00:18:19.747 Atomic Compare & Write Unit: 1 00:18:19.747 Fused Compare & Write: Supported 00:18:19.747 Scatter-Gather List 00:18:19.747 SGL Command Set: Supported (Dword aligned) 00:18:19.747 SGL Keyed: Not Supported 00:18:19.747 SGL Bit Bucket Descriptor: Not Supported 00:18:19.747 SGL Metadata Pointer: Not Supported 00:18:19.747 Oversized SGL: Not Supported 00:18:19.747 SGL 
Metadata Address: Not Supported 00:18:19.747 SGL Offset: Not Supported 00:18:19.747 Transport SGL Data Block: Not Supported 00:18:19.747 Replay Protected Memory Block: Not Supported 00:18:19.747 00:18:19.747 Firmware Slot Information 00:18:19.747 ========================= 00:18:19.747 Active slot: 1 00:18:19.747 Slot 1 Firmware Revision: 25.01 00:18:19.747 00:18:19.747 00:18:19.747 Commands Supported and Effects 00:18:19.747 ============================== 00:18:19.747 Admin Commands 00:18:19.747 -------------- 00:18:19.747 Get Log Page (02h): Supported 00:18:19.747 Identify (06h): Supported 00:18:19.747 Abort (08h): Supported 00:18:19.747 Set Features (09h): Supported 00:18:19.747 Get Features (0Ah): Supported 00:18:19.747 Asynchronous Event Request (0Ch): Supported 00:18:19.747 Keep Alive (18h): Supported 00:18:19.747 I/O Commands 00:18:19.747 ------------ 00:18:19.747 Flush (00h): Supported LBA-Change 00:18:19.747 Write (01h): Supported LBA-Change 00:18:19.747 Read (02h): Supported 00:18:19.747 Compare (05h): Supported 00:18:19.747 Write Zeroes (08h): Supported LBA-Change 00:18:19.747 Dataset Management (09h): Supported LBA-Change 00:18:19.747 Copy (19h): Supported LBA-Change 00:18:19.747 00:18:19.747 Error Log 00:18:19.747 ========= 00:18:19.747 00:18:19.747 Arbitration 00:18:19.747 =========== 00:18:19.747 Arbitration Burst: 1 00:18:19.747 00:18:19.747 Power Management 00:18:19.747 ================ 00:18:19.747 Number of Power States: 1 00:18:19.747 Current Power State: Power State #0 00:18:19.747 Power State #0: 00:18:19.747 Max Power: 0.00 W 00:18:19.747 Non-Operational State: Operational 00:18:19.747 Entry Latency: Not Reported 00:18:19.747 Exit Latency: Not Reported 00:18:19.747 Relative Read Throughput: 0 00:18:19.747 Relative Read Latency: 0 00:18:19.748 Relative Write Throughput: 0 00:18:19.748 Relative Write Latency: 0 00:18:19.748 Idle Power: Not Reported 00:18:19.748 Active Power: Not Reported 00:18:19.748 Non-Operational Permissive Mode: Not 
Supported 00:18:19.748 00:18:19.748 Health Information 00:18:19.748 ================== 00:18:19.748 Critical Warnings: 00:18:19.748 Available Spare Space: OK 00:18:19.748 Temperature: OK 00:18:19.748 Device Reliability: OK 00:18:19.748 Read Only: No 00:18:19.748 Volatile Memory Backup: OK 00:18:19.748 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:19.748 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:19.748 Available Spare: 0% 00:18:19.748 Available Sp[2024-11-29 13:02:22.248261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:19.748 [2024-11-29 13:02:22.256165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:19.748 [2024-11-29 13:02:22.256188] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:19.748 [2024-11-29 13:02:22.256195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.748 [2024-11-29 13:02:22.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.748 [2024-11-29 13:02:22.256204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.748 [2024-11-29 13:02:22.256208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.748 [2024-11-29 13:02:22.256247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:19.748 [2024-11-29 13:02:22.256255] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:19.748 
[2024-11-29 13:02:22.257250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.748 [2024-11-29 13:02:22.257286] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:19.748 [2024-11-29 13:02:22.257291] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:19.748 [2024-11-29 13:02:22.258253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:19.748 [2024-11-29 13:02:22.258263] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:19.748 [2024-11-29 13:02:22.258307] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:19.748 [2024-11-29 13:02:22.261163] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:19.748 are Threshold: 0% 00:18:19.748 Life Percentage Used: 0% 00:18:19.748 Data Units Read: 0 00:18:19.748 Data Units Written: 0 00:18:19.748 Host Read Commands: 0 00:18:19.748 Host Write Commands: 0 00:18:19.748 Controller Busy Time: 0 minutes 00:18:19.748 Power Cycles: 0 00:18:19.748 Power On Hours: 0 hours 00:18:19.748 Unsafe Shutdowns: 0 00:18:19.748 Unrecoverable Media Errors: 0 00:18:19.748 Lifetime Error Log Entries: 0 00:18:19.748 Warning Temperature Time: 0 minutes 00:18:19.748 Critical Temperature Time: 0 minutes 00:18:19.748 00:18:19.748 Number of Queues 00:18:19.748 ================ 00:18:19.748 Number of I/O Submission Queues: 127 00:18:19.748 Number of I/O Completion Queues: 127 00:18:19.748 00:18:19.748 Active Namespaces 00:18:19.748 ================= 00:18:19.748 Namespace ID:1 00:18:19.748 Error Recovery Timeout: Unlimited 
00:18:19.748 Command Set Identifier: NVM (00h) 00:18:19.748 Deallocate: Supported 00:18:19.748 Deallocated/Unwritten Error: Not Supported 00:18:19.748 Deallocated Read Value: Unknown 00:18:19.748 Deallocate in Write Zeroes: Not Supported 00:18:19.748 Deallocated Guard Field: 0xFFFF 00:18:19.748 Flush: Supported 00:18:19.748 Reservation: Supported 00:18:19.748 Namespace Sharing Capabilities: Multiple Controllers 00:18:19.748 Size (in LBAs): 131072 (0GiB) 00:18:19.748 Capacity (in LBAs): 131072 (0GiB) 00:18:19.748 Utilization (in LBAs): 131072 (0GiB) 00:18:19.748 NGUID: 9E277C04CA064B40BB97E1E0E7AE0794 00:18:19.748 UUID: 9e277c04-ca06-4b40-bb97-e1e0e7ae0794 00:18:19.748 Thin Provisioning: Not Supported 00:18:19.748 Per-NS Atomic Units: Yes 00:18:19.748 Atomic Boundary Size (Normal): 0 00:18:19.748 Atomic Boundary Size (PFail): 0 00:18:19.748 Atomic Boundary Offset: 0 00:18:19.748 Maximum Single Source Range Length: 65535 00:18:19.748 Maximum Copy Length: 65535 00:18:19.748 Maximum Source Range Count: 1 00:18:19.748 NGUID/EUI64 Never Reused: No 00:18:19.748 Namespace Write Protected: No 00:18:19.748 Number of LBA Formats: 1 00:18:19.748 Current LBA Format: LBA Format #00 00:18:19.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:19.748 00:18:19.748 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:20.009 [2024-11-29 13:02:22.448528] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.297 Initializing NVMe Controllers 00:18:25.297 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:25.297 Initialization complete. Launching workers. 00:18:25.297 ======================================================== 00:18:25.297 Latency(us) 00:18:25.297 Device Information : IOPS MiB/s Average min max 00:18:25.297 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40096.20 156.63 3194.70 862.78 8732.72 00:18:25.297 ======================================================== 00:18:25.297 Total : 40096.20 156.63 3194.70 862.78 8732.72 00:18:25.297 00:18:25.297 [2024-11-29 13:02:27.558359] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.297 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:25.297 [2024-11-29 13:02:27.749967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.611 Initializing NVMe Controllers 00:18:30.611 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:30.611 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:30.612 Initialization complete. Launching workers. 
00:18:30.612 ======================================================== 00:18:30.612 Latency(us) 00:18:30.612 Device Information : IOPS MiB/s Average min max 00:18:30.612 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39942.08 156.02 3204.31 853.31 6806.08 00:18:30.612 ======================================================== 00:18:30.612 Total : 39942.08 156.02 3204.31 853.31 6806.08 00:18:30.612 00:18:30.612 [2024-11-29 13:02:32.768442] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.612 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:30.612 [2024-11-29 13:02:32.967623] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:35.904 [2024-11-29 13:02:38.116249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:35.904 Initializing NVMe Controllers 00:18:35.904 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.904 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:35.904 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:35.904 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:35.904 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:35.904 Initialization complete. Launching workers. 
00:18:35.904 Starting thread on core 2 00:18:35.904 Starting thread on core 3 00:18:35.904 Starting thread on core 1 00:18:35.904 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:35.904 [2024-11-29 13:02:38.363501] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.210 [2024-11-29 13:02:41.427343] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.210 Initializing NVMe Controllers 00:18:39.210 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.210 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:39.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:39.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:39.210 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:39.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:39.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:39.210 Initialization complete. Launching workers. 
00:18:39.210 Starting thread on core 1 with urgent priority queue 00:18:39.210 Starting thread on core 2 with urgent priority queue 00:18:39.210 Starting thread on core 3 with urgent priority queue 00:18:39.210 Starting thread on core 0 with urgent priority queue 00:18:39.210 SPDK bdev Controller (SPDK2 ) core 0: 16554.00 IO/s 6.04 secs/100000 ios 00:18:39.210 SPDK bdev Controller (SPDK2 ) core 1: 7495.33 IO/s 13.34 secs/100000 ios 00:18:39.210 SPDK bdev Controller (SPDK2 ) core 2: 8530.33 IO/s 11.72 secs/100000 ios 00:18:39.210 SPDK bdev Controller (SPDK2 ) core 3: 16862.67 IO/s 5.93 secs/100000 ios 00:18:39.210 ======================================================== 00:18:39.210 00:18:39.210 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:39.210 [2024-11-29 13:02:41.666530] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:39.210 Initializing NVMe Controllers 00:18:39.210 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.210 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:39.210 Namespace ID: 1 size: 0GB 00:18:39.210 Initialization complete. 00:18:39.210 INFO: using host memory buffer for IO 00:18:39.210 Hello world! 
00:18:39.210 [2024-11-29 13:02:41.675592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:39.210 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:39.552 [2024-11-29 13:02:41.918802] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.527 Initializing NVMe Controllers 00:18:40.527 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.527 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:40.527 Initialization complete. Launching workers. 00:18:40.527 submit (in ns) avg, min, max = 5938.2, 2837.5, 4001258.3 00:18:40.527 complete (in ns) avg, min, max = 16446.8, 1641.7, 4000591.7 00:18:40.527 00:18:40.527 Submit histogram 00:18:40.527 ================ 00:18:40.527 Range in us Cumulative Count 00:18:40.527 2.827 - 2.840: 0.0098% ( 2) 00:18:40.527 2.840 - 2.853: 0.7968% ( 160) 00:18:40.527 2.853 - 2.867: 2.2675% ( 299) 00:18:40.527 2.867 - 2.880: 5.0907% ( 574) 00:18:40.527 2.880 - 2.893: 9.7978% ( 957) 00:18:40.527 2.893 - 2.907: 15.8379% ( 1228) 00:18:40.527 2.907 - 2.920: 20.5204% ( 952) 00:18:40.527 2.920 - 2.933: 25.6898% ( 1051) 00:18:40.527 2.933 - 2.947: 31.0216% ( 1084) 00:18:40.527 2.947 - 2.960: 35.6057% ( 932) 00:18:40.527 2.960 - 2.973: 40.7752% ( 1051) 00:18:40.527 2.973 - 2.987: 45.5757% ( 976) 00:18:40.527 2.987 - 3.000: 52.4273% ( 1393) 00:18:40.527 3.000 - 3.013: 61.4530% ( 1835) 00:18:40.527 3.013 - 3.027: 71.4771% ( 2038) 00:18:40.527 3.027 - 3.040: 80.1387% ( 1761) 00:18:40.527 3.040 - 3.053: 86.5378% ( 1301) 00:18:40.527 3.053 - 3.067: 91.6040% ( 1030) 00:18:40.527 3.067 - 3.080: 95.3421% ( 760) 00:18:40.527 3.080 - 3.093: 97.6046% ( 460) 00:18:40.527 3.093 - 3.107: 98.5785% 
( 198) 00:18:40.527 3.107 - 3.120: 99.1245% ( 111) 00:18:40.527 3.120 - 3.133: 99.3016% ( 36) 00:18:40.527 3.133 - 3.147: 99.4098% ( 22) 00:18:40.527 3.147 - 3.160: 99.4934% ( 17) 00:18:40.527 3.160 - 3.173: 99.5377% ( 9) 00:18:40.527 3.173 - 3.187: 99.5573% ( 4) 00:18:40.527 3.187 - 3.200: 99.5819% ( 5) 00:18:40.527 3.253 - 3.267: 99.5868% ( 1) 00:18:40.527 3.347 - 3.360: 99.5918% ( 1) 00:18:40.527 3.360 - 3.373: 99.5967% ( 1) 00:18:40.527 3.547 - 3.573: 99.6016% ( 1) 00:18:40.527 3.573 - 3.600: 99.6065% ( 1) 00:18:40.527 3.653 - 3.680: 99.6114% ( 1) 00:18:40.527 3.733 - 3.760: 99.6163% ( 1) 00:18:40.527 3.787 - 3.813: 99.6213% ( 1) 00:18:40.527 3.893 - 3.920: 99.6262% ( 1) 00:18:40.527 3.973 - 4.000: 99.6360% ( 2) 00:18:40.527 4.453 - 4.480: 99.6409% ( 1) 00:18:40.527 4.507 - 4.533: 99.6508% ( 2) 00:18:40.527 4.587 - 4.613: 99.6557% ( 1) 00:18:40.527 4.667 - 4.693: 99.6606% ( 1) 00:18:40.527 4.693 - 4.720: 99.6655% ( 1) 00:18:40.527 4.747 - 4.773: 99.6754% ( 2) 00:18:40.527 4.773 - 4.800: 99.6852% ( 2) 00:18:40.527 4.800 - 4.827: 99.6901% ( 1) 00:18:40.527 4.827 - 4.853: 99.6950% ( 1) 00:18:40.527 4.880 - 4.907: 99.7000% ( 1) 00:18:40.527 4.907 - 4.933: 99.7049% ( 1) 00:18:40.527 4.933 - 4.960: 99.7098% ( 1) 00:18:40.527 4.960 - 4.987: 99.7147% ( 1) 00:18:40.527 5.013 - 5.040: 99.7246% ( 2) 00:18:40.527 5.040 - 5.067: 99.7344% ( 2) 00:18:40.527 5.093 - 5.120: 99.7393% ( 1) 00:18:40.527 5.120 - 5.147: 99.7590% ( 4) 00:18:40.527 5.147 - 5.173: 99.7639% ( 1) 00:18:40.527 5.173 - 5.200: 99.7737% ( 2) 00:18:40.527 5.200 - 5.227: 99.7787% ( 1) 00:18:40.527 5.280 - 5.307: 99.7836% ( 1) 00:18:40.527 5.307 - 5.333: 99.7885% ( 1) 00:18:40.527 5.333 - 5.360: 99.7934% ( 1) 00:18:40.527 5.360 - 5.387: 99.7983% ( 1) 00:18:40.527 5.440 - 5.467: 99.8033% ( 1) 00:18:40.527 5.493 - 5.520: 99.8082% ( 1) 00:18:40.527 5.627 - 5.653: 99.8131% ( 1) 00:18:40.527 5.760 - 5.787: 99.8229% ( 2) 00:18:40.527 5.813 - 5.840: 99.8328% ( 2) 00:18:40.527 5.840 - 5.867: 99.8377% ( 1) 00:18:40.527 
5.893 - 5.920: 99.8426% ( 1) 00:18:40.527 5.947 - 5.973: 99.8475% ( 1) 00:18:40.527 5.973 - 6.000: 99.8574% ( 2) 00:18:40.527 6.027 - 6.053: 99.8623% ( 1) 00:18:40.527 6.080 - 6.107: 99.8672% ( 1) 00:18:40.527 6.160 - 6.187: 99.8721% ( 1) 00:18:40.527 6.240 - 6.267: 99.8770% ( 1) 00:18:40.527 6.320 - 6.347: 99.8820% ( 1) 00:18:40.527 6.347 - 6.373: 99.8869% ( 1) 00:18:40.527 6.400 - 6.427: 99.8918% ( 1) 00:18:40.527 6.427 - 6.453: 99.8967% ( 1) 00:18:40.527 [2024-11-29 13:02:43.011683] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.527 6.613 - 6.640: 99.9016% ( 1) 00:18:40.527 6.827 - 6.880: 99.9065% ( 1) 00:18:40.527 6.880 - 6.933: 99.9115% ( 1) 00:18:40.527 6.987 - 7.040: 99.9164% ( 1) 00:18:40.527 7.360 - 7.413: 99.9213% ( 1) 00:18:40.527 8.427 - 8.480: 99.9262% ( 1) 00:18:40.527 3986.773 - 4014.080: 100.0000% ( 15) 00:18:40.527 00:18:40.527 Complete histogram 00:18:40.527 ================== 00:18:40.527 Range in us Cumulative Count 00:18:40.527 1.640 - 1.647: 0.6788% ( 138) 00:18:40.527 1.647 - 1.653: 1.2296% ( 112) 00:18:40.527 1.653 - 1.660: 1.3133% ( 17) 00:18:40.527 1.660 - 1.667: 1.4166% ( 21) 00:18:40.527 1.667 - 1.673: 1.5395% ( 25) 00:18:40.527 1.673 - 1.680: 1.6035% ( 13) 00:18:40.527 1.680 - 1.687: 1.6182% ( 3) 00:18:40.527 1.687 - 1.693: 1.6330% ( 3) 00:18:40.527 1.693 - 1.700: 1.6379% ( 1) 00:18:40.527 1.700 - 1.707: 6.2073% ( 929) 00:18:40.527 1.707 - 1.720: 62.1711% ( 11378) 00:18:40.527 1.720 - 1.733: 79.4501% ( 3513) 00:18:40.527 1.733 - 1.747: 83.4145% ( 806) 00:18:40.527 1.747 - 1.760: 84.4031% ( 201) 00:18:40.527 1.760 - 1.773: 87.9593% ( 723) 00:18:40.527 1.773 - 1.787: 93.2418% ( 1074) 00:18:40.527 1.787 - 1.800: 97.2899% ( 823) 00:18:40.527 1.800 - 1.813: 98.8982% ( 327) 00:18:40.527 1.813 - 1.827: 99.3557% ( 93) 00:18:40.527 1.827 - 1.840: 99.4983% ( 29) 00:18:40.527 1.840 - 1.853: 99.5229% ( 5) 00:18:40.527 3.280 - 3.293: 99.5278% ( 1) 00:18:40.527 3.333 - 3.347: 99.5327% ( 
1) 00:18:40.527 3.373 - 3.387: 99.5377% ( 1) 00:18:40.527 3.400 - 3.413: 99.5426% ( 1) 00:18:40.527 3.467 - 3.493: 99.5475% ( 1) 00:18:40.527 3.493 - 3.520: 99.5524% ( 1) 00:18:40.527 3.573 - 3.600: 99.5573% ( 1) 00:18:40.527 3.760 - 3.787: 99.5622% ( 1) 00:18:40.527 3.813 - 3.840: 99.5672% ( 1) 00:18:40.527 3.840 - 3.867: 99.5721% ( 1) 00:18:40.527 3.893 - 3.920: 99.5770% ( 1) 00:18:40.527 4.133 - 4.160: 99.5819% ( 1) 00:18:40.527 4.187 - 4.213: 99.5868% ( 1) 00:18:40.527 4.213 - 4.240: 99.5918% ( 1) 00:18:40.527 4.293 - 4.320: 99.5967% ( 1) 00:18:40.527 4.347 - 4.373: 99.6016% ( 1) 00:18:40.527 4.427 - 4.453: 99.6065% ( 1) 00:18:40.527 4.533 - 4.560: 99.6114% ( 1) 00:18:40.527 4.560 - 4.587: 99.6163% ( 1) 00:18:40.527 4.640 - 4.667: 99.6213% ( 1) 00:18:40.527 4.987 - 5.013: 99.6262% ( 1) 00:18:40.527 5.307 - 5.333: 99.6311% ( 1) 00:18:40.527 3713.707 - 3741.013: 99.6360% ( 1) 00:18:40.527 3986.773 - 4014.080: 100.0000% ( 74) 00:18:40.527 00:18:40.527 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:40.527 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:40.527 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:40.527 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:40.527 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:40.527 [ 00:18:40.527 { 00:18:40.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:40.527 "subtype": "Discovery", 00:18:40.527 "listen_addresses": [], 00:18:40.528 "allow_any_host": true, 00:18:40.528 "hosts": [] 00:18:40.528 }, 00:18:40.528 { 
00:18:40.528 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:40.528 "subtype": "NVMe", 00:18:40.528 "listen_addresses": [ 00:18:40.528 { 00:18:40.528 "trtype": "VFIOUSER", 00:18:40.528 "adrfam": "IPv4", 00:18:40.528 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:40.528 "trsvcid": "0" 00:18:40.528 } 00:18:40.528 ], 00:18:40.528 "allow_any_host": true, 00:18:40.528 "hosts": [], 00:18:40.528 "serial_number": "SPDK1", 00:18:40.528 "model_number": "SPDK bdev Controller", 00:18:40.528 "max_namespaces": 32, 00:18:40.528 "min_cntlid": 1, 00:18:40.528 "max_cntlid": 65519, 00:18:40.528 "namespaces": [ 00:18:40.528 { 00:18:40.528 "nsid": 1, 00:18:40.528 "bdev_name": "Malloc1", 00:18:40.528 "name": "Malloc1", 00:18:40.528 "nguid": "DB4BE8EE561C4F17B702D9044A5FDE2F", 00:18:40.528 "uuid": "db4be8ee-561c-4f17-b702-d9044a5fde2f" 00:18:40.528 }, 00:18:40.528 { 00:18:40.528 "nsid": 2, 00:18:40.528 "bdev_name": "Malloc3", 00:18:40.528 "name": "Malloc3", 00:18:40.528 "nguid": "55BCE824D6AF4790A4664D75C932F715", 00:18:40.528 "uuid": "55bce824-d6af-4790-a466-4d75c932f715" 00:18:40.528 } 00:18:40.528 ] 00:18:40.528 }, 00:18:40.528 { 00:18:40.528 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:40.528 "subtype": "NVMe", 00:18:40.528 "listen_addresses": [ 00:18:40.528 { 00:18:40.528 "trtype": "VFIOUSER", 00:18:40.528 "adrfam": "IPv4", 00:18:40.528 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:40.528 "trsvcid": "0" 00:18:40.528 } 00:18:40.528 ], 00:18:40.528 "allow_any_host": true, 00:18:40.528 "hosts": [], 00:18:40.528 "serial_number": "SPDK2", 00:18:40.528 "model_number": "SPDK bdev Controller", 00:18:40.528 "max_namespaces": 32, 00:18:40.528 "min_cntlid": 1, 00:18:40.528 "max_cntlid": 65519, 00:18:40.528 "namespaces": [ 00:18:40.528 { 00:18:40.528 "nsid": 1, 00:18:40.528 "bdev_name": "Malloc2", 00:18:40.528 "name": "Malloc2", 00:18:40.528 "nguid": "9E277C04CA064B40BB97E1E0E7AE0794", 00:18:40.528 "uuid": "9e277c04-ca06-4b40-bb97-e1e0e7ae0794" 00:18:40.528 } 00:18:40.528 ] 
00:18:40.528 } 00:18:40.528 ] 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=883609 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:40.789 [2024-11-29 13:02:43.389575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.789 Malloc4 00:18:40.789 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:41.051 [2024-11-29 13:02:43.583886] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.051 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:41.051 Asynchronous Event Request test 00:18:41.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:41.051 Registering asynchronous event callbacks... 00:18:41.051 Starting namespace attribute notice tests for all controllers... 00:18:41.051 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:41.051 aer_cb - Changed Namespace 00:18:41.051 Cleaning up... 
00:18:41.312 [ 00:18:41.312 { 00:18:41.312 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.312 "subtype": "Discovery", 00:18:41.312 "listen_addresses": [], 00:18:41.312 "allow_any_host": true, 00:18:41.312 "hosts": [] 00:18:41.312 }, 00:18:41.312 { 00:18:41.312 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:41.312 "subtype": "NVMe", 00:18:41.312 "listen_addresses": [ 00:18:41.312 { 00:18:41.312 "trtype": "VFIOUSER", 00:18:41.312 "adrfam": "IPv4", 00:18:41.312 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:41.312 "trsvcid": "0" 00:18:41.312 } 00:18:41.312 ], 00:18:41.312 "allow_any_host": true, 00:18:41.312 "hosts": [], 00:18:41.312 "serial_number": "SPDK1", 00:18:41.312 "model_number": "SPDK bdev Controller", 00:18:41.312 "max_namespaces": 32, 00:18:41.312 "min_cntlid": 1, 00:18:41.312 "max_cntlid": 65519, 00:18:41.312 "namespaces": [ 00:18:41.312 { 00:18:41.312 "nsid": 1, 00:18:41.312 "bdev_name": "Malloc1", 00:18:41.312 "name": "Malloc1", 00:18:41.312 "nguid": "DB4BE8EE561C4F17B702D9044A5FDE2F", 00:18:41.312 "uuid": "db4be8ee-561c-4f17-b702-d9044a5fde2f" 00:18:41.312 }, 00:18:41.312 { 00:18:41.312 "nsid": 2, 00:18:41.312 "bdev_name": "Malloc3", 00:18:41.312 "name": "Malloc3", 00:18:41.312 "nguid": "55BCE824D6AF4790A4664D75C932F715", 00:18:41.312 "uuid": "55bce824-d6af-4790-a466-4d75c932f715" 00:18:41.312 } 00:18:41.312 ] 00:18:41.312 }, 00:18:41.312 { 00:18:41.312 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:41.313 "subtype": "NVMe", 00:18:41.313 "listen_addresses": [ 00:18:41.313 { 00:18:41.313 "trtype": "VFIOUSER", 00:18:41.313 "adrfam": "IPv4", 00:18:41.313 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:41.313 "trsvcid": "0" 00:18:41.313 } 00:18:41.313 ], 00:18:41.313 "allow_any_host": true, 00:18:41.313 "hosts": [], 00:18:41.313 "serial_number": "SPDK2", 00:18:41.313 "model_number": "SPDK bdev Controller", 00:18:41.313 "max_namespaces": 32, 00:18:41.313 "min_cntlid": 1, 00:18:41.313 "max_cntlid": 65519, 00:18:41.313 "namespaces": [ 
00:18:41.313 { 00:18:41.313 "nsid": 1, 00:18:41.313 "bdev_name": "Malloc2", 00:18:41.313 "name": "Malloc2", 00:18:41.313 "nguid": "9E277C04CA064B40BB97E1E0E7AE0794", 00:18:41.313 "uuid": "9e277c04-ca06-4b40-bb97-e1e0e7ae0794" 00:18:41.313 }, 00:18:41.313 { 00:18:41.313 "nsid": 2, 00:18:41.313 "bdev_name": "Malloc4", 00:18:41.313 "name": "Malloc4", 00:18:41.313 "nguid": "955E459B099E45CB9F3E9B8860156EBB", 00:18:41.313 "uuid": "955e459b-099e-45cb-9f3e-9b8860156ebb" 00:18:41.313 } 00:18:41.313 ] 00:18:41.313 } 00:18:41.313 ] 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 883609 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 874644 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 874644 ']' 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 874644 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874644 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874644' 00:18:41.313 killing process with pid 874644 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 874644 00:18:41.313 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 874644 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=883756 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 883756' 00:18:41.574 Process pid: 883756 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:41.574 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 883756 00:18:41.575 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 883756 ']' 00:18:41.575 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.575 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.575 13:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.575 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.575 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:41.575 [2024-11-29 13:02:44.061489] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:41.575 [2024-11-29 13:02:44.062418] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:18:41.575 [2024-11-29 13:02:44.062459] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.575 [2024-11-29 13:02:44.147124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.575 [2024-11-29 13:02:44.177612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.575 [2024-11-29 13:02:44.177647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.575 [2024-11-29 13:02:44.177653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.575 [2024-11-29 13:02:44.177658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.575 [2024-11-29 13:02:44.177662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.575 [2024-11-29 13:02:44.179081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.575 [2024-11-29 13:02:44.179219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.575 [2024-11-29 13:02:44.179555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.575 [2024-11-29 13:02:44.179555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.575 [2024-11-29 13:02:44.231249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:41.575 [2024-11-29 13:02:44.232175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:41.575 [2024-11-29 13:02:44.232983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:41.575 [2024-11-29 13:02:44.233505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:41.575 [2024-11-29 13:02:44.233532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:18:42.529 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.529 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:42.529 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:43.468 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:43.468 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:43.468 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:43.468 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:43.468 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:43.468 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:43.729 Malloc1 00:18:43.729 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:43.990 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:44.251 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:44.251 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:44.251 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:44.251 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:44.512 Malloc2 00:18:44.512 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:44.773 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:44.773 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 883756 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 883756 ']' 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 883756 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.034 13:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883756 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883756' 00:18:45.034 killing process with pid 883756 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 883756 00:18:45.034 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 883756 00:18:45.295 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:45.295 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:45.295 00:18:45.295 real 0m51.044s 00:18:45.295 user 3m15.534s 00:18:45.295 sys 0m2.736s 00:18:45.295 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.295 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:45.295 ************************************ 00:18:45.295 END TEST nvmf_vfio_user 00:18:45.295 ************************************ 00:18:45.296 13:02:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:45.296 13:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.296 13:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.296 13:02:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.296 ************************************ 00:18:45.296 START TEST nvmf_vfio_user_nvme_compliance 00:18:45.296 ************************************ 00:18:45.296 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:45.558 * Looking for test storage... 00:18:45.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.558 13:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.558 13:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.558 --rc genhtml_branch_coverage=1 00:18:45.558 --rc genhtml_function_coverage=1 00:18:45.558 --rc genhtml_legend=1 00:18:45.558 --rc geninfo_all_blocks=1 00:18:45.558 --rc geninfo_unexecuted_blocks=1 00:18:45.558 00:18:45.558 ' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.558 --rc genhtml_branch_coverage=1 00:18:45.558 --rc genhtml_function_coverage=1 00:18:45.558 --rc genhtml_legend=1 00:18:45.558 --rc geninfo_all_blocks=1 00:18:45.558 --rc geninfo_unexecuted_blocks=1 00:18:45.558 00:18:45.558 ' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.558 --rc genhtml_branch_coverage=1 00:18:45.558 --rc genhtml_function_coverage=1 00:18:45.558 --rc 
genhtml_legend=1 00:18:45.558 --rc geninfo_all_blocks=1 00:18:45.558 --rc geninfo_unexecuted_blocks=1 00:18:45.558 00:18:45.558 ' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.558 --rc genhtml_branch_coverage=1 00:18:45.558 --rc genhtml_function_coverage=1 00:18:45.558 --rc genhtml_legend=1 00:18:45.558 --rc geninfo_all_blocks=1 00:18:45.558 --rc geninfo_unexecuted_blocks=1 00:18:45.558 00:18:45.558 ' 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.558 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.559 13:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.559 13:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=884563 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 884563' 00:18:45.559 Process pid: 884563 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 884563 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 884563 ']' 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.559 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:45.559 [2024-11-29 13:02:48.199821] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:18:45.559 [2024-11-29 13:02:48.199879] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.819 [2024-11-29 13:02:48.284106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.819 [2024-11-29 13:02:48.313948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.819 [2024-11-29 13:02:48.313978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.819 [2024-11-29 13:02:48.313985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.819 [2024-11-29 13:02:48.313989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.819 [2024-11-29 13:02:48.313994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.819 [2024-11-29 13:02:48.315094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.819 [2024-11-29 13:02:48.315219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.819 [2024-11-29 13:02:48.315407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.391 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.391 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:46.391 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:47.333 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:47.333 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:47.333 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:47.333 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.333 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.617 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.618 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.618 malloc0 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:47.618 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:47.618 00:18:47.618 00:18:47.618 CUnit - A unit testing framework for C - Version 2.1-3 00:18:47.618 http://cunit.sourceforge.net/ 00:18:47.618 00:18:47.618 00:18:47.618 Suite: nvme_compliance 00:18:47.618 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-29 13:02:50.252636] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:47.618 [2024-11-29 13:02:50.253947] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:47.618 [2024-11-29 13:02:50.253958] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:47.618 [2024-11-29 13:02:50.253963] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:47.618 [2024-11-29 13:02:50.255653] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:47.618 passed 00:18:47.878 Test: admin_identify_ctrlr_verify_fused ...[2024-11-29 13:02:50.333166] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:47.878 [2024-11-29 13:02:50.336188] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:47.878 passed 00:18:47.878 Test: admin_identify_ns ...[2024-11-29 13:02:50.413547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:47.878 [2024-11-29 13:02:50.477169] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:47.878 [2024-11-29 13:02:50.482167] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:47.878 [2024-11-29 13:02:50.501243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:47.878 passed 00:18:48.139 Test: admin_get_features_mandatory_features ...[2024-11-29 13:02:50.577292] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.139 [2024-11-29 13:02:50.580317] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.139 passed 00:18:48.139 Test: admin_get_features_optional_features ...[2024-11-29 13:02:50.657792] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.139 [2024-11-29 13:02:50.660817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.139 passed 00:18:48.139 Test: admin_set_features_number_of_queues ...[2024-11-29 13:02:50.735566] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.399 [2024-11-29 13:02:50.841245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.399 passed 00:18:48.399 Test: admin_get_log_page_mandatory_logs ...[2024-11-29 13:02:50.914459] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.399 [2024-11-29 13:02:50.917477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.399 passed 00:18:48.399 Test: admin_get_log_page_with_lpo ...[2024-11-29 13:02:50.991228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.399 [2024-11-29 13:02:51.061165] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:48.399 [2024-11-29 13:02:51.074215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.659 passed 00:18:48.659 Test: fabric_property_get ...[2024-11-29 13:02:51.147452] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.659 [2024-11-29 13:02:51.148651] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:48.659 [2024-11-29 13:02:51.150471] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.659 passed 00:18:48.659 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-29 13:02:51.226931] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.659 [2024-11-29 13:02:51.228128] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:48.659 [2024-11-29 13:02:51.229944] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.659 passed 00:18:48.659 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-29 13:02:51.305694] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.919 [2024-11-29 13:02:51.390165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:48.919 [2024-11-29 13:02:51.406167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:48.919 [2024-11-29 13:02:51.411243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.919 passed 00:18:48.919 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-29 13:02:51.483452] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:48.919 [2024-11-29 13:02:51.484649] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:48.919 [2024-11-29 13:02:51.486470] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:48.919 passed 00:18:48.919 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-29 13:02:51.562214] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.179 [2024-11-29 13:02:51.639166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:49.179 [2024-11-29 
13:02:51.663165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:49.179 [2024-11-29 13:02:51.668229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.179 passed 00:18:49.179 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-29 13:02:51.744293] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.179 [2024-11-29 13:02:51.745491] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:49.179 [2024-11-29 13:02:51.745510] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:49.179 [2024-11-29 13:02:51.747305] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.179 passed 00:18:49.179 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-29 13:02:51.822092] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.438 [2024-11-29 13:02:51.915168] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:49.438 [2024-11-29 13:02:51.923171] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:49.438 [2024-11-29 13:02:51.931165] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:49.438 [2024-11-29 13:02:51.939163] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:49.438 [2024-11-29 13:02:51.968232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.438 passed 00:18:49.438 Test: admin_create_io_sq_verify_pc ...[2024-11-29 13:02:52.042427] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:49.438 [2024-11-29 13:02:52.059171] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:49.438 [2024-11-29 13:02:52.076564] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:49.438 passed 00:18:49.698 Test: admin_create_io_qp_max_qps ...[2024-11-29 13:02:52.152060] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:50.637 [2024-11-29 13:02:53.254168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:51.207 [2024-11-29 13:02:53.650214] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.207 passed 00:18:51.207 Test: admin_create_io_sq_shared_cq ...[2024-11-29 13:02:53.723998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:51.207 [2024-11-29 13:02:53.856169] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:51.467 [2024-11-29 13:02:53.893220] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:51.467 passed 00:18:51.467 00:18:51.467 Run Summary: Type Total Ran Passed Failed Inactive 00:18:51.467 suites 1 1 n/a 0 0 00:18:51.467 tests 18 18 18 0 0 00:18:51.467 asserts 360 360 360 0 n/a 00:18:51.467 00:18:51.467 Elapsed time = 1.495 seconds 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 884563 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 884563 ']' 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 884563 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.467 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884563 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884563' 00:18:51.467 killing process with pid 884563 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 884563 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 884563 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:51.467 00:18:51.467 real 0m6.209s 00:18:51.467 user 0m17.628s 00:18:51.467 sys 0m0.530s 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.467 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:51.467 ************************************ 00:18:51.467 END TEST nvmf_vfio_user_nvme_compliance 00:18:51.467 ************************************ 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 ************************************ 00:18:51.728 START TEST nvmf_vfio_user_fuzz 00:18:51.728 ************************************ 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:51.728 * Looking for test storage... 00:18:51.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:51.728 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.729 13:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.729 --rc genhtml_branch_coverage=1 00:18:51.729 --rc genhtml_function_coverage=1 00:18:51.729 --rc genhtml_legend=1 00:18:51.729 --rc geninfo_all_blocks=1 00:18:51.729 --rc geninfo_unexecuted_blocks=1 00:18:51.729 00:18:51.729 ' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.729 --rc genhtml_branch_coverage=1 00:18:51.729 --rc genhtml_function_coverage=1 00:18:51.729 --rc genhtml_legend=1 00:18:51.729 --rc geninfo_all_blocks=1 00:18:51.729 --rc geninfo_unexecuted_blocks=1 00:18:51.729 00:18:51.729 ' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.729 --rc genhtml_branch_coverage=1 00:18:51.729 --rc genhtml_function_coverage=1 00:18:51.729 --rc genhtml_legend=1 00:18:51.729 --rc geninfo_all_blocks=1 00:18:51.729 --rc geninfo_unexecuted_blocks=1 00:18:51.729 00:18:51.729 ' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:51.729 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:51.729 --rc genhtml_branch_coverage=1 00:18:51.729 --rc genhtml_function_coverage=1 00:18:51.729 --rc genhtml_legend=1 00:18:51.729 --rc geninfo_all_blocks=1 00:18:51.729 --rc geninfo_unexecuted_blocks=1 00:18:51.729 00:18:51.729 ' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.729 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.990 13:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=885914 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 885914' 00:18:51.990 Process pid: 885914 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 885914 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 885914 ']' 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.990 13:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.990 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:52.932 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.932 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:52.932 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.873 malloc0 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.873 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:53.874 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.874 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:53.874 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.874 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:53.874 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:25.998 Fuzzing completed. Shutting down the fuzz application 00:19:25.998 00:19:25.998 Dumping successful admin opcodes: 00:19:25.998 9, 10, 00:19:25.998 Dumping successful io opcodes: 00:19:25.998 0, 00:19:25.998 NS: 0x20000081ef00 I/O qp, Total commands completed: 1444087, total successful commands: 5648, random_seed: 157657600 00:19:25.998 NS: 0x20000081ef00 admin qp, Total commands completed: 358912, total successful commands: 94, random_seed: 3414030208 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 885914 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 885914 ']' 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 885914 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885914 00:19:25.998 13:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885914' 00:19:25.998 killing process with pid 885914 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 885914 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 885914 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:25.998 00:19:25.998 real 0m32.800s 00:19:25.998 user 0m37.939s 00:19:25.998 sys 0m24.858s 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.998 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 ************************************ 00:19:25.998 END TEST nvmf_vfio_user_fuzz 00:19:25.998 ************************************ 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 ************************************ 00:19:25.998 START TEST nvmf_auth_target 00:19:25.998 ************************************ 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:25.998 * Looking for test storage... 00:19:25.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.998 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.998 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.998 --rc genhtml_branch_coverage=1 00:19:25.998 --rc genhtml_function_coverage=1 00:19:25.998 --rc genhtml_legend=1 00:19:25.998 --rc geninfo_all_blocks=1 00:19:25.998 --rc geninfo_unexecuted_blocks=1 00:19:25.998 00:19:25.998 ' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.998 --rc genhtml_branch_coverage=1 00:19:25.998 --rc genhtml_function_coverage=1 00:19:25.998 --rc genhtml_legend=1 00:19:25.998 --rc geninfo_all_blocks=1 00:19:25.998 --rc geninfo_unexecuted_blocks=1 00:19:25.998 00:19:25.998 ' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.998 --rc genhtml_branch_coverage=1 00:19:25.998 --rc genhtml_function_coverage=1 00:19:25.998 --rc genhtml_legend=1 00:19:25.998 --rc geninfo_all_blocks=1 00:19:25.998 --rc geninfo_unexecuted_blocks=1 00:19:25.998 00:19:25.998 ' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.998 --rc genhtml_branch_coverage=1 00:19:25.998 --rc genhtml_function_coverage=1 00:19:25.998 --rc genhtml_legend=1 00:19:25.998 
--rc geninfo_all_blocks=1 00:19:25.998 --rc geninfo_unexecuted_blocks=1 00:19:25.998 00:19:25.998 ' 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.998 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.999 
13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:25.999 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:25.999 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.999 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.596 13:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.596 13:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:32.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.596 
13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.596 
13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.596 13:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.596 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:32.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:19:32.597 00:19:32.597 --- 10.0.0.2 ping statistics --- 00:19:32.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.597 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:19:32.597 00:19:32.597 --- 10.0.0.1 ping statistics --- 00:19:32.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.597 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=896466 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 896466 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 896466 ']' 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.597 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=896811 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3c34b924b5b328cf45da9dca2d1a72bfe3565d3591926621 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KHY 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3c34b924b5b328cf45da9dca2d1a72bfe3565d3591926621 0 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3c34b924b5b328cf45da9dca2d1a72bfe3565d3591926621 0 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3c34b924b5b328cf45da9dca2d1a72bfe3565d3591926621 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KHY 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KHY 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.KHY 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f6c6517a4ba07085c925ba72c1f59506252f0d779a83edad41a6aadcc8e78f6a 00:19:33.167 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tRB 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f6c6517a4ba07085c925ba72c1f59506252f0d779a83edad41a6aadcc8e78f6a 3 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f6c6517a4ba07085c925ba72c1f59506252f0d779a83edad41a6aadcc8e78f6a 3 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f6c6517a4ba07085c925ba72c1f59506252f0d779a83edad41a6aadcc8e78f6a 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:33.168 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tRB 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tRB 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.tRB 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d61202ae2c9dce274e5ea5ee861f8a90 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nFX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d61202ae2c9dce274e5ea5ee861f8a90 1 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
d61202ae2c9dce274e5ea5ee861f8a90 1 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d61202ae2c9dce274e5ea5ee861f8a90 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nFX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nFX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nFX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52e4bb40e3f117702a7a62b832fbd82dfbe1053fffcc4d77 00:19:33.429 13:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.429 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cib 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52e4bb40e3f117702a7a62b832fbd82dfbe1053fffcc4d77 2 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52e4bb40e3f117702a7a62b832fbd82dfbe1053fffcc4d77 2 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52e4bb40e3f117702a7a62b832fbd82dfbe1053fffcc4d77 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:33.430 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cib 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cib 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cib 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c61a42efaa1905754869e1d1b78ef1fbb5ac1265413fa48e 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AnF 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c61a42efaa1905754869e1d1b78ef1fbb5ac1265413fa48e 2 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c61a42efaa1905754869e1d1b78ef1fbb5ac1265413fa48e 2 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c61a42efaa1905754869e1d1b78ef1fbb5ac1265413fa48e 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AnF 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AnF 00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.AnF
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a0dac69e31901512315fd9366e2eb44
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.awm
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a0dac69e31901512315fd9366e2eb44 1
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a0dac69e31901512315fd9366e2eb44 1
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a0dac69e31901512315fd9366e2eb44
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:19:33.430 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.awm
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.awm
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.awm
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f7495f16a0430c40079bc4d9abff1d529a2ef7ceee079395b9104e3b293f679f
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Zjd
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f7495f16a0430c40079bc4d9abff1d529a2ef7ceee079395b9104e3b293f679f 3
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f7495f16a0430c40079bc4d9abff1d529a2ef7ceee079395b9104e3b293f679f 3
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f7495f16a0430c40079bc4d9abff1d529a2ef7ceee079395b9104e3b293f679f
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Zjd
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Zjd
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Zjd
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 896466
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 896466 ']'
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:33.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
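The gen_dhchap_key trace above is the whole recipe: xxd -p pulls len/2 random bytes as a hex string, and format_key's inline `python -` step wraps that string as a DHHC-1 secret. Below is a minimal Python sketch of that formatting step — an illustration of what the traced commands appear to do, not SPDK's actual helper. It assumes the secret's base64 payload is the ASCII hex key followed by its little-endian CRC32, and that the middle field is the digest id from the digests table in the trace:

```python
import base64
import os
import zlib

def gen_dhchap_key(digest: str, hexlen: int) -> str:
    """Random hex key of `hexlen` chars, framed as DHHC-1:<digest id>:<base64>: (sketch)."""
    digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}  # table from the trace
    key = os.urandom(hexlen // 2).hex().encode()  # what `xxd -p -c0 -l N /dev/urandom` prints
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: CRC32 of the key, little-endian
    return "DHHC-1:{:02x}:{}:".format(digests[digest], base64.b64encode(key + crc).decode())
```

The shape is consistent with the secrets echoed later in this log: the ctrl-secret DHHC-1:01:MmEwZGFjNjll…: base64-decodes to the hex string 2a0dac69e319… generated here, plus four trailing CRC bytes.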
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:33.692 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 896811 /var/tmp/host.sock
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 896811 ']'
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:19:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.953 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KHY 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KHY 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KHY 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.tRB ]] 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRB 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRB 00:19:34.215 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRB 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nFX 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nFX 00:19:34.477 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nFX 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.cib ]] 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cib 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cib 00:19:34.738 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cib 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AnF 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AnF 00:19:34.999 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AnF 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.awm ]] 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awm 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awm 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awm 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Zjd 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Zjd 00:19:35.262 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Zjd 00:19:35.523 13:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:35.523 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:35.523 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.523 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.523 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.523 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.785 13:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.785 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.047 00:19:36.047 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.047 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.047 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:36.309 {
00:19:36.309 "cntlid": 1,
00:19:36.309 "qid": 0,
00:19:36.309 "state": "enabled",
00:19:36.309 "thread": "nvmf_tgt_poll_group_000",
00:19:36.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:36.309 "listen_address": {
00:19:36.309 "trtype": "TCP",
00:19:36.309 "adrfam": "IPv4",
00:19:36.309 "traddr": "10.0.0.2",
00:19:36.309 "trsvcid": "4420"
00:19:36.309 },
00:19:36.309 "peer_address": {
00:19:36.309 "trtype": "TCP",
00:19:36.309 "adrfam": "IPv4",
00:19:36.309 "traddr": "10.0.0.1",
00:19:36.309 "trsvcid": "41172"
00:19:36.309 },
00:19:36.309 "auth": {
00:19:36.309 "state": "completed",
00:19:36.309 "digest": "sha256",
00:19:36.309 "dhgroup": "null"
00:19:36.309 }
00:19:36.309 }
00:19:36.309 ]'
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:36.309 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.569 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:36.569 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:37.140 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.400 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:37.400 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.400 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.400 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.400 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:37.401 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:37.661
00:19:37.661 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:37.661 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:37.661 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:37.921 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:37.921 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:37.921 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.921 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.921 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:37.922 {
00:19:37.922 "cntlid": 3,
00:19:37.922 "qid": 0,
00:19:37.922 "state": "enabled",
00:19:37.922 "thread": "nvmf_tgt_poll_group_000",
00:19:37.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:37.922 "listen_address": {
00:19:37.922 "trtype": "TCP",
00:19:37.922 "adrfam": "IPv4",
00:19:37.922 "traddr": "10.0.0.2",
00:19:37.922 "trsvcid": "4420"
00:19:37.922 },
00:19:37.922 "peer_address": {
00:19:37.922 "trtype": "TCP",
00:19:37.922 "adrfam": "IPv4",
00:19:37.922 "traddr": "10.0.0.1",
00:19:37.922 "trsvcid": "41182"
00:19:37.922 },
00:19:37.922 "auth": {
00:19:37.922 "state": "completed",
00:19:37.922 "digest": "sha256",
00:19:37.922 "dhgroup": "null"
00:19:37.922 }
00:19:37.922 }
00:19:37.922 ]'
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:37.922 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.182 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==:
00:19:38.182 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.753 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.013 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.274 00:19:39.274 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.274 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.274 
13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:39.534 {
00:19:39.534 "cntlid": 5,
00:19:39.534 "qid": 0,
00:19:39.534 "state": "enabled",
00:19:39.534 "thread": "nvmf_tgt_poll_group_000",
00:19:39.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:19:39.534 "listen_address": {
00:19:39.534 "trtype": "TCP",
00:19:39.534 "adrfam": "IPv4",
00:19:39.534 "traddr": "10.0.0.2",
00:19:39.534 "trsvcid": "4420"
00:19:39.534 },
00:19:39.534 "peer_address": {
00:19:39.534 "trtype": "TCP",
00:19:39.534 "adrfam": "IPv4",
00:19:39.534 "traddr": "10.0.0.1",
00:19:39.534 "trsvcid": "41208"
00:19:39.534 },
00:19:39.534 "auth": {
00:19:39.534 "state": "completed",
00:19:39.534 "digest": "sha256",
00:19:39.534 "dhgroup": "null"
00:19:39.534 }
00:19:39.534 }
00:19:39.534 ]'
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76
-- # jq -r '.[0].auth.dhgroup' 00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.534 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.794 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:39.794 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:40.365 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.365 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.365 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.365 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.625 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.887 00:19:40.887 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.887 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.887 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.147 
13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.147 { 00:19:41.147 "cntlid": 7, 00:19:41.147 "qid": 0, 00:19:41.147 "state": "enabled", 00:19:41.147 "thread": "nvmf_tgt_poll_group_000", 00:19:41.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.147 "listen_address": { 00:19:41.147 "trtype": "TCP", 00:19:41.147 "adrfam": "IPv4", 00:19:41.147 "traddr": "10.0.0.2", 00:19:41.147 "trsvcid": "4420" 00:19:41.147 }, 00:19:41.147 "peer_address": { 00:19:41.147 "trtype": "TCP", 00:19:41.147 "adrfam": "IPv4", 00:19:41.147 "traddr": "10.0.0.1", 00:19:41.147 "trsvcid": "41238" 00:19:41.147 }, 00:19:41.147 "auth": { 00:19:41.147 "state": "completed", 00:19:41.147 "digest": "sha256", 00:19:41.147 "dhgroup": "null" 00:19:41.147 } 00:19:41.147 } 00:19:41.147 ]' 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.147 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.148 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:41.148 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.148 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.148 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.148 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.408 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:41.408 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.978 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.238 13:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.500 00:19:42.500 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.500 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.500 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.760 { 00:19:42.760 "cntlid": 9, 00:19:42.760 "qid": 0, 00:19:42.760 "state": "enabled", 00:19:42.760 "thread": "nvmf_tgt_poll_group_000", 00:19:42.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:42.760 "listen_address": { 00:19:42.760 "trtype": "TCP", 00:19:42.760 "adrfam": "IPv4", 00:19:42.760 "traddr": "10.0.0.2", 00:19:42.760 "trsvcid": "4420" 00:19:42.760 }, 00:19:42.760 "peer_address": { 00:19:42.760 "trtype": "TCP", 00:19:42.760 "adrfam": "IPv4", 00:19:42.760 "traddr": "10.0.0.1", 00:19:42.760 "trsvcid": "40334" 00:19:42.760 
}, 00:19:42.760 "auth": { 00:19:42.760 "state": "completed", 00:19:42.760 "digest": "sha256", 00:19:42.760 "dhgroup": "ffdhe2048" 00:19:42.760 } 00:19:42.760 } 00:19:42.760 ]' 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.760 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.021 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:43.021 13:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.592 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.854 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.115 00:19:44.115 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.115 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.115 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.375 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.376 { 00:19:44.376 "cntlid": 11, 00:19:44.376 "qid": 0, 00:19:44.376 "state": "enabled", 00:19:44.376 "thread": "nvmf_tgt_poll_group_000", 00:19:44.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:44.376 "listen_address": { 00:19:44.376 "trtype": "TCP", 00:19:44.376 "adrfam": "IPv4", 00:19:44.376 "traddr": "10.0.0.2", 00:19:44.376 "trsvcid": "4420" 00:19:44.376 }, 00:19:44.376 "peer_address": { 00:19:44.376 "trtype": "TCP", 00:19:44.376 "adrfam": "IPv4", 00:19:44.376 "traddr": "10.0.0.1", 00:19:44.376 "trsvcid": "40356" 00:19:44.376 }, 00:19:44.376 "auth": { 00:19:44.376 "state": "completed", 00:19:44.376 "digest": "sha256", 00:19:44.376 "dhgroup": "ffdhe2048" 00:19:44.376 } 00:19:44.376 } 00:19:44.376 ]' 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.376 13:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.376 13:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.636 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:44.636 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:45.207 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.208 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.468 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.727 00:19:45.728 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.728 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.728 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.989 13:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.989 { 00:19:45.989 "cntlid": 13, 00:19:45.989 "qid": 0, 00:19:45.989 "state": "enabled", 00:19:45.989 "thread": "nvmf_tgt_poll_group_000", 00:19:45.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.989 "listen_address": { 00:19:45.989 "trtype": "TCP", 00:19:45.989 "adrfam": "IPv4", 00:19:45.989 "traddr": "10.0.0.2", 00:19:45.989 "trsvcid": "4420" 00:19:45.989 }, 00:19:45.989 "peer_address": { 00:19:45.989 "trtype": "TCP", 00:19:45.989 "adrfam": "IPv4", 00:19:45.989 "traddr": "10.0.0.1", 00:19:45.989 "trsvcid": "40378" 00:19:45.989 }, 00:19:45.989 "auth": { 00:19:45.989 "state": "completed", 00:19:45.989 "digest": "sha256", 00:19:45.989 "dhgroup": "ffdhe2048" 00:19:45.989 } 00:19:45.989 } 00:19:45.989 ]' 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.989 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.250 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:46.250 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.823 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.084 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.085 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.346 00:19:47.346 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.346 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.346 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.608 { 00:19:47.608 "cntlid": 15, 00:19:47.608 "qid": 0, 00:19:47.608 "state": "enabled", 00:19:47.608 "thread": "nvmf_tgt_poll_group_000", 00:19:47.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.608 "listen_address": { 00:19:47.608 "trtype": "TCP", 00:19:47.608 "adrfam": "IPv4", 00:19:47.608 "traddr": "10.0.0.2", 00:19:47.608 "trsvcid": "4420" 00:19:47.608 }, 00:19:47.608 "peer_address": { 00:19:47.608 "trtype": "TCP", 00:19:47.608 "adrfam": "IPv4", 00:19:47.608 "traddr": "10.0.0.1", 
00:19:47.608 "trsvcid": "40412" 00:19:47.608 }, 00:19:47.608 "auth": { 00:19:47.608 "state": "completed", 00:19:47.608 "digest": "sha256", 00:19:47.608 "dhgroup": "ffdhe2048" 00:19:47.608 } 00:19:47.608 } 00:19:47.608 ]' 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.608 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.868 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:47.868 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:48.437 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.437 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.437 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.437 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.697 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.697 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.697 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.697 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.698 13:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.698 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.017 00:19:49.017 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.017 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.017 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.318 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.318 { 00:19:49.318 "cntlid": 17, 00:19:49.318 "qid": 0, 00:19:49.318 "state": "enabled", 00:19:49.318 "thread": "nvmf_tgt_poll_group_000", 00:19:49.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.318 "listen_address": { 00:19:49.318 "trtype": "TCP", 00:19:49.319 "adrfam": "IPv4", 00:19:49.319 "traddr": "10.0.0.2", 00:19:49.319 "trsvcid": "4420" 00:19:49.319 }, 00:19:49.319 "peer_address": { 00:19:49.319 "trtype": "TCP", 00:19:49.319 "adrfam": "IPv4", 00:19:49.319 "traddr": "10.0.0.1", 00:19:49.319 "trsvcid": "40446" 00:19:49.319 }, 00:19:49.319 "auth": { 00:19:49.319 "state": "completed", 00:19:49.319 "digest": "sha256", 00:19:49.319 "dhgroup": "ffdhe3072" 00:19:49.319 } 00:19:49.319 } 00:19:49.319 ]' 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.319 13:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.319 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.579 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:49.579 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.149 13:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.149 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.410 13:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.410 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.670 00:19:50.670 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.671 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.671 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.931 { 00:19:50.931 "cntlid": 19, 00:19:50.931 "qid": 0, 00:19:50.931 "state": "enabled", 00:19:50.931 "thread": "nvmf_tgt_poll_group_000", 00:19:50.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.931 "listen_address": { 00:19:50.931 "trtype": "TCP", 00:19:50.931 "adrfam": "IPv4", 00:19:50.931 "traddr": "10.0.0.2", 00:19:50.931 "trsvcid": "4420" 00:19:50.931 }, 00:19:50.931 "peer_address": { 00:19:50.931 "trtype": "TCP", 00:19:50.931 "adrfam": "IPv4", 00:19:50.931 "traddr": "10.0.0.1", 00:19:50.931 "trsvcid": "40470" 00:19:50.931 }, 00:19:50.931 "auth": { 00:19:50.931 "state": "completed", 00:19:50.931 "digest": "sha256", 00:19:50.931 "dhgroup": "ffdhe3072" 00:19:50.931 } 00:19:50.931 } 00:19:50.931 ]' 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.931 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.932 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.192 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:51.192 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.763 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:51.763 13:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.023 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:52.023 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.023 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.023 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.024 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.284 00:19:52.284 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.284 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.284 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.545 { 00:19:52.545 "cntlid": 21, 00:19:52.545 "qid": 0, 00:19:52.545 "state": "enabled", 00:19:52.545 "thread": "nvmf_tgt_poll_group_000", 00:19:52.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.545 "listen_address": { 00:19:52.545 "trtype": "TCP", 00:19:52.545 "adrfam": "IPv4", 00:19:52.545 "traddr": "10.0.0.2", 00:19:52.545 
"trsvcid": "4420" 00:19:52.545 }, 00:19:52.545 "peer_address": { 00:19:52.545 "trtype": "TCP", 00:19:52.545 "adrfam": "IPv4", 00:19:52.545 "traddr": "10.0.0.1", 00:19:52.545 "trsvcid": "53850" 00:19:52.545 }, 00:19:52.545 "auth": { 00:19:52.545 "state": "completed", 00:19:52.545 "digest": "sha256", 00:19:52.545 "dhgroup": "ffdhe3072" 00:19:52.545 } 00:19:52.545 } 00:19:52.545 ]' 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.545 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.804 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:52.804 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:53.375 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.375 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.375 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.375 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.375 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.375 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.375 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.375 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.636 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.896 00:19:53.896 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.896 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.896 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.157 { 00:19:54.157 "cntlid": 23, 00:19:54.157 "qid": 0, 00:19:54.157 "state": "enabled", 00:19:54.157 "thread": "nvmf_tgt_poll_group_000", 00:19:54.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.157 "listen_address": { 00:19:54.157 "trtype": "TCP", 00:19:54.157 "adrfam": "IPv4", 00:19:54.157 "traddr": "10.0.0.2", 00:19:54.157 "trsvcid": "4420" 00:19:54.157 }, 00:19:54.157 "peer_address": { 00:19:54.157 "trtype": "TCP", 00:19:54.157 "adrfam": "IPv4", 00:19:54.157 "traddr": "10.0.0.1", 00:19:54.157 "trsvcid": "53870" 00:19:54.157 }, 00:19:54.157 "auth": { 00:19:54.157 "state": "completed", 00:19:54.157 "digest": "sha256", 00:19:54.157 "dhgroup": "ffdhe3072" 00:19:54.157 } 00:19:54.157 } 00:19:54.157 ]' 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.157 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.157 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.417 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:54.417 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.989 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.251 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.512 00:19:55.512 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.512 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.512 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.773 13:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.773 { 00:19:55.773 "cntlid": 25, 00:19:55.773 "qid": 0, 00:19:55.773 "state": "enabled", 00:19:55.773 "thread": "nvmf_tgt_poll_group_000", 00:19:55.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.773 "listen_address": { 00:19:55.773 "trtype": "TCP", 00:19:55.773 "adrfam": "IPv4", 00:19:55.773 "traddr": "10.0.0.2", 00:19:55.773 "trsvcid": "4420" 00:19:55.773 }, 00:19:55.773 "peer_address": { 00:19:55.773 "trtype": "TCP", 00:19:55.773 "adrfam": "IPv4", 00:19:55.773 "traddr": "10.0.0.1", 00:19:55.773 "trsvcid": "53884" 00:19:55.773 }, 00:19:55.773 "auth": { 00:19:55.773 "state": "completed", 00:19:55.773 "digest": "sha256", 00:19:55.773 "dhgroup": "ffdhe4096" 00:19:55.773 } 00:19:55.773 } 00:19:55.773 ]' 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.034 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:56.034 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.606 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.606 13:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.867 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.127 00:19:57.127 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.127 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.127 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.387 { 00:19:57.387 "cntlid": 27, 00:19:57.387 "qid": 0, 00:19:57.387 "state": "enabled", 00:19:57.387 "thread": "nvmf_tgt_poll_group_000", 00:19:57.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.387 "listen_address": { 00:19:57.387 "trtype": "TCP", 00:19:57.387 "adrfam": "IPv4", 00:19:57.387 "traddr": "10.0.0.2", 00:19:57.387 
"trsvcid": "4420" 00:19:57.387 }, 00:19:57.387 "peer_address": { 00:19:57.387 "trtype": "TCP", 00:19:57.387 "adrfam": "IPv4", 00:19:57.387 "traddr": "10.0.0.1", 00:19:57.387 "trsvcid": "53900" 00:19:57.387 }, 00:19:57.387 "auth": { 00:19:57.387 "state": "completed", 00:19:57.387 "digest": "sha256", 00:19:57.387 "dhgroup": "ffdhe4096" 00:19:57.387 } 00:19:57.387 } 00:19:57.387 ]' 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.387 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.387 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.387 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.387 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.647 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:57.647 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.218 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.480 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.741 00:19:58.742 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.742 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:58.742 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.002 { 00:19:59.002 "cntlid": 29, 00:19:59.002 "qid": 0, 00:19:59.002 "state": "enabled", 00:19:59.002 "thread": "nvmf_tgt_poll_group_000", 00:19:59.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:59.002 "listen_address": { 00:19:59.002 "trtype": "TCP", 00:19:59.002 "adrfam": "IPv4", 00:19:59.002 "traddr": "10.0.0.2", 00:19:59.002 "trsvcid": "4420" 00:19:59.002 }, 00:19:59.002 "peer_address": { 00:19:59.002 "trtype": "TCP", 00:19:59.002 "adrfam": "IPv4", 00:19:59.002 "traddr": "10.0.0.1", 00:19:59.002 "trsvcid": "53928" 00:19:59.002 }, 00:19:59.002 "auth": { 00:19:59.002 "state": "completed", 00:19:59.002 "digest": "sha256", 00:19:59.002 "dhgroup": "ffdhe4096" 00:19:59.002 } 00:19:59.002 } 00:19:59.002 ]' 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.002 13:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.002 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.262 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:59.262 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:19:59.833 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.833 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.833 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.833 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.094 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.355 00:20:00.355 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.355 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.355 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.614 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.615 { 00:20:00.615 "cntlid": 31, 00:20:00.615 "qid": 0, 00:20:00.615 "state": "enabled", 00:20:00.615 "thread": "nvmf_tgt_poll_group_000", 00:20:00.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:00.615 "listen_address": { 00:20:00.615 "trtype": "TCP", 00:20:00.615 "adrfam": "IPv4", 00:20:00.615 "traddr": "10.0.0.2", 00:20:00.615 "trsvcid": "4420" 00:20:00.615 }, 00:20:00.615 "peer_address": { 00:20:00.615 "trtype": "TCP", 00:20:00.615 "adrfam": "IPv4", 00:20:00.615 "traddr": "10.0.0.1", 00:20:00.615 "trsvcid": "53952" 00:20:00.615 }, 00:20:00.615 "auth": { 00:20:00.615 "state": "completed", 00:20:00.615 "digest": "sha256", 00:20:00.615 "dhgroup": "ffdhe4096" 00:20:00.615 } 00:20:00.615 } 00:20:00.615 ]' 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.615 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.874 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:00.874 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:01.444 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.706 13:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.706 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.277 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.277 { 00:20:02.277 "cntlid": 33, 00:20:02.277 "qid": 0, 00:20:02.277 "state": "enabled", 00:20:02.277 "thread": "nvmf_tgt_poll_group_000", 00:20:02.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:02.277 "listen_address": { 00:20:02.277 "trtype": "TCP", 00:20:02.277 "adrfam": "IPv4", 00:20:02.277 "traddr": "10.0.0.2", 00:20:02.277 
"trsvcid": "4420" 00:20:02.277 }, 00:20:02.277 "peer_address": { 00:20:02.277 "trtype": "TCP", 00:20:02.277 "adrfam": "IPv4", 00:20:02.277 "traddr": "10.0.0.1", 00:20:02.277 "trsvcid": "53980" 00:20:02.277 }, 00:20:02.277 "auth": { 00:20:02.277 "state": "completed", 00:20:02.277 "digest": "sha256", 00:20:02.277 "dhgroup": "ffdhe6144" 00:20:02.277 } 00:20:02.277 } 00:20:02.277 ]' 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.277 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.537 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.537 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.537 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.537 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.537 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.537 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:02.537 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.476 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.476 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.477 13:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.477 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.736 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.997 { 00:20:03.997 "cntlid": 35, 00:20:03.997 "qid": 0, 00:20:03.997 "state": "enabled", 00:20:03.997 "thread": "nvmf_tgt_poll_group_000", 00:20:03.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.997 "listen_address": { 00:20:03.997 "trtype": "TCP", 00:20:03.997 "adrfam": "IPv4", 00:20:03.997 "traddr": "10.0.0.2", 00:20:03.997 "trsvcid": "4420" 00:20:03.997 }, 00:20:03.997 "peer_address": { 00:20:03.997 "trtype": "TCP", 00:20:03.997 "adrfam": "IPv4", 00:20:03.997 "traddr": "10.0.0.1", 00:20:03.997 "trsvcid": "53892" 00:20:03.997 }, 00:20:03.997 "auth": { 00:20:03.997 "state": "completed", 00:20:03.997 "digest": "sha256", 00:20:03.997 "dhgroup": "ffdhe6144" 00:20:03.997 } 00:20:03.997 } 00:20:03.997 ]' 00:20:03.997 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.997 13:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.998 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:04.258 13:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.200 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.461 00:20:05.461 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.461 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.461 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.723 13:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.723 { 00:20:05.723 "cntlid": 37, 00:20:05.723 "qid": 0, 00:20:05.723 "state": "enabled", 00:20:05.723 "thread": "nvmf_tgt_poll_group_000", 00:20:05.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.723 "listen_address": { 00:20:05.723 "trtype": "TCP", 00:20:05.723 "adrfam": "IPv4", 00:20:05.723 "traddr": "10.0.0.2", 00:20:05.723 "trsvcid": "4420" 00:20:05.723 }, 00:20:05.723 "peer_address": { 00:20:05.723 "trtype": "TCP", 00:20:05.723 "adrfam": "IPv4", 00:20:05.723 "traddr": "10.0.0.1", 00:20:05.723 "trsvcid": "53920" 00:20:05.723 }, 00:20:05.723 "auth": { 00:20:05.723 "state": "completed", 00:20:05.723 "digest": "sha256", 00:20:05.723 "dhgroup": "ffdhe6144" 00:20:05.723 } 00:20:05.723 } 00:20:05.723 ]' 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.723 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:05.984 13:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.926 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.927 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.187 00:20:07.187 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.187 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.187 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.448 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.449 { 00:20:07.449 "cntlid": 39, 00:20:07.449 "qid": 0, 00:20:07.449 "state": "enabled", 00:20:07.449 "thread": "nvmf_tgt_poll_group_000", 00:20:07.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:07.449 "listen_address": { 00:20:07.449 "trtype": "TCP", 00:20:07.449 "adrfam": 
"IPv4", 00:20:07.449 "traddr": "10.0.0.2", 00:20:07.449 "trsvcid": "4420" 00:20:07.449 }, 00:20:07.449 "peer_address": { 00:20:07.449 "trtype": "TCP", 00:20:07.449 "adrfam": "IPv4", 00:20:07.449 "traddr": "10.0.0.1", 00:20:07.449 "trsvcid": "53954" 00:20:07.449 }, 00:20:07.449 "auth": { 00:20:07.449 "state": "completed", 00:20:07.449 "digest": "sha256", 00:20:07.449 "dhgroup": "ffdhe6144" 00:20:07.449 } 00:20:07.449 } 00:20:07.449 ]' 00:20:07.449 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.449 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.449 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.449 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.449 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.709 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.709 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.709 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.709 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:07.709 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:08.651 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.651 
13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.651 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.223 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.223 13:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.223 { 00:20:09.223 "cntlid": 41, 00:20:09.223 "qid": 0, 00:20:09.223 "state": "enabled", 00:20:09.223 "thread": "nvmf_tgt_poll_group_000", 00:20:09.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.223 "listen_address": { 00:20:09.223 "trtype": "TCP", 00:20:09.223 "adrfam": "IPv4", 00:20:09.223 "traddr": "10.0.0.2", 00:20:09.223 "trsvcid": "4420" 00:20:09.223 }, 00:20:09.223 "peer_address": { 00:20:09.223 "trtype": "TCP", 00:20:09.223 "adrfam": "IPv4", 00:20:09.223 "traddr": "10.0.0.1", 00:20:09.223 "trsvcid": "53988" 00:20:09.223 }, 00:20:09.223 "auth": { 00:20:09.223 "state": "completed", 00:20:09.223 "digest": "sha256", 00:20:09.223 "dhgroup": "ffdhe8192" 00:20:09.223 } 00:20:09.223 } 00:20:09.223 ]' 00:20:09.223 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.484 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:09.484 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.484 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.484 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.484 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.484 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.484 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.744 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:09.744 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.314 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.573 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.143 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.143 13:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.143 { 00:20:11.143 "cntlid": 43, 00:20:11.143 "qid": 0, 00:20:11.143 "state": "enabled", 00:20:11.143 "thread": "nvmf_tgt_poll_group_000", 00:20:11.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.143 "listen_address": { 00:20:11.143 "trtype": "TCP", 00:20:11.143 "adrfam": "IPv4", 00:20:11.143 "traddr": "10.0.0.2", 00:20:11.143 "trsvcid": "4420" 00:20:11.143 }, 00:20:11.143 "peer_address": { 00:20:11.143 "trtype": "TCP", 00:20:11.143 "adrfam": "IPv4", 00:20:11.143 "traddr": "10.0.0.1", 00:20:11.143 "trsvcid": "54008" 00:20:11.143 }, 00:20:11.143 "auth": { 00:20:11.143 "state": "completed", 00:20:11.143 "digest": "sha256", 00:20:11.143 "dhgroup": "ffdhe8192" 00:20:11.143 } 00:20:11.143 } 00:20:11.143 ]' 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.143 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.402 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.402 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.402 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.402 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.402 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.402 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:11.402 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.341 13:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.911 00:20:12.911 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.911 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.911 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.171 { 00:20:13.171 "cntlid": 45, 00:20:13.171 "qid": 0, 00:20:13.171 "state": "enabled", 00:20:13.171 "thread": "nvmf_tgt_poll_group_000", 00:20:13.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:13.171 
"listen_address": { 00:20:13.171 "trtype": "TCP", 00:20:13.171 "adrfam": "IPv4", 00:20:13.171 "traddr": "10.0.0.2", 00:20:13.171 "trsvcid": "4420" 00:20:13.171 }, 00:20:13.171 "peer_address": { 00:20:13.171 "trtype": "TCP", 00:20:13.171 "adrfam": "IPv4", 00:20:13.171 "traddr": "10.0.0.1", 00:20:13.171 "trsvcid": "48280" 00:20:13.171 }, 00:20:13.171 "auth": { 00:20:13.171 "state": "completed", 00:20:13.171 "digest": "sha256", 00:20:13.171 "dhgroup": "ffdhe8192" 00:20:13.171 } 00:20:13.171 } 00:20:13.171 ]' 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.171 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.431 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:13.431 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.001 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.262 13:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.833 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.833 { 00:20:14.833 "cntlid": 47, 00:20:14.833 "qid": 0, 00:20:14.833 "state": "enabled", 00:20:14.833 "thread": "nvmf_tgt_poll_group_000", 00:20:14.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.833 "listen_address": { 00:20:14.833 "trtype": "TCP", 00:20:14.833 "adrfam": "IPv4", 00:20:14.833 "traddr": "10.0.0.2", 00:20:14.833 "trsvcid": "4420" 00:20:14.833 }, 00:20:14.833 "peer_address": { 00:20:14.833 "trtype": "TCP", 00:20:14.833 "adrfam": "IPv4", 00:20:14.833 "traddr": "10.0.0.1", 00:20:14.833 "trsvcid": "48308" 00:20:14.833 }, 00:20:14.833 "auth": { 00:20:14.833 "state": "completed", 00:20:14.833 "digest": "sha256", 00:20:14.833 "dhgroup": "ffdhe8192" 00:20:14.833 } 00:20:14.833 } 00:20:14.833 ]' 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.833 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.833 13:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:15.094 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.035 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.036 
13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.036 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.297 00:20:16.297 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.297 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.297 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.558 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.558 { 00:20:16.558 "cntlid": 49, 00:20:16.558 "qid": 0, 00:20:16.558 "state": "enabled", 00:20:16.558 "thread": "nvmf_tgt_poll_group_000", 00:20:16.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:16.558 "listen_address": { 00:20:16.558 "trtype": "TCP", 00:20:16.558 "adrfam": "IPv4", 00:20:16.558 "traddr": "10.0.0.2", 00:20:16.558 "trsvcid": "4420" 00:20:16.558 }, 00:20:16.558 "peer_address": { 00:20:16.559 "trtype": "TCP", 00:20:16.559 "adrfam": "IPv4", 00:20:16.559 "traddr": "10.0.0.1", 00:20:16.559 "trsvcid": "48346" 00:20:16.559 }, 00:20:16.559 "auth": { 00:20:16.559 "state": "completed", 00:20:16.559 "digest": "sha384", 00:20:16.559 "dhgroup": "null" 00:20:16.559 } 00:20:16.559 } 00:20:16.559 ]' 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:16.559 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.820 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:16.820 13:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:17.391 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.391 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.391 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.391 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.391 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.392 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.392 13:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.392 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.652 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.653 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.914 00:20:17.914 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.914 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.914 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.175 { 00:20:18.175 "cntlid": 51, 00:20:18.175 "qid": 0, 00:20:18.175 "state": "enabled", 00:20:18.175 "thread": "nvmf_tgt_poll_group_000", 00:20:18.175 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:18.175 "listen_address": { 00:20:18.175 "trtype": "TCP", 00:20:18.175 "adrfam": "IPv4", 00:20:18.175 "traddr": "10.0.0.2", 00:20:18.175 "trsvcid": "4420" 00:20:18.175 }, 00:20:18.175 "peer_address": { 00:20:18.175 "trtype": "TCP", 00:20:18.175 "adrfam": "IPv4", 00:20:18.175 "traddr": "10.0.0.1", 00:20:18.175 "trsvcid": "48362" 00:20:18.175 }, 00:20:18.175 "auth": { 00:20:18.175 "state": "completed", 00:20:18.175 "digest": "sha384", 00:20:18.175 "dhgroup": "null" 00:20:18.175 } 00:20:18.175 } 00:20:18.175 ]' 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.176 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.176 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.176 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.176 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.436 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:18.436 13:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:19.009 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.009 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.009 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.009 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.268 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.269 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.269 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.269 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.269 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.269 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.528 00:20:19.528 13:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.528 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.528 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.788 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.788 { 00:20:19.788 "cntlid": 53, 00:20:19.788 "qid": 0, 00:20:19.788 "state": "enabled", 00:20:19.788 "thread": "nvmf_tgt_poll_group_000", 00:20:19.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.788 "listen_address": { 00:20:19.788 "trtype": "TCP", 00:20:19.788 "adrfam": "IPv4", 00:20:19.788 "traddr": "10.0.0.2", 00:20:19.789 "trsvcid": "4420" 00:20:19.789 }, 00:20:19.789 "peer_address": { 00:20:19.789 "trtype": "TCP", 00:20:19.789 "adrfam": "IPv4", 00:20:19.789 "traddr": "10.0.0.1", 00:20:19.789 "trsvcid": "48400" 00:20:19.789 }, 00:20:19.789 "auth": { 00:20:19.789 "state": "completed", 00:20:19.789 "digest": "sha384", 00:20:19.789 "dhgroup": "null" 00:20:19.789 } 00:20:19.789 } 00:20:19.789 ]' 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.789 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.048 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:20.048 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.620 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.880 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:20.881 
13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.881 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.142 00:20:21.142 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.142 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.142 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.403 13:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.403 { 00:20:21.403 "cntlid": 55, 00:20:21.403 "qid": 0, 00:20:21.403 "state": "enabled", 00:20:21.403 "thread": "nvmf_tgt_poll_group_000", 00:20:21.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.403 "listen_address": { 00:20:21.403 "trtype": "TCP", 00:20:21.403 "adrfam": "IPv4", 00:20:21.403 "traddr": "10.0.0.2", 00:20:21.403 "trsvcid": "4420" 00:20:21.403 }, 00:20:21.403 "peer_address": { 00:20:21.403 "trtype": "TCP", 00:20:21.403 "adrfam": "IPv4", 00:20:21.403 "traddr": "10.0.0.1", 00:20:21.403 "trsvcid": "48430" 00:20:21.403 }, 00:20:21.403 "auth": { 00:20:21.403 "state": "completed", 00:20:21.403 "digest": "sha384", 00:20:21.403 "dhgroup": "null" 00:20:21.403 } 00:20:21.403 } 00:20:21.403 ]' 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.403 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.403 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.403 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.403 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.663 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:21.663 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.233 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.233 13:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.493 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.493 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.493 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.493 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.493 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.494 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.754 00:20:22.754 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.754 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.754 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.016 { 00:20:23.016 "cntlid": 57, 00:20:23.016 "qid": 0, 00:20:23.016 "state": "enabled", 00:20:23.016 "thread": "nvmf_tgt_poll_group_000", 00:20:23.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:23.016 "listen_address": { 00:20:23.016 "trtype": "TCP", 00:20:23.016 "adrfam": "IPv4", 00:20:23.016 "traddr": "10.0.0.2", 00:20:23.016 
"trsvcid": "4420" 00:20:23.016 }, 00:20:23.016 "peer_address": { 00:20:23.016 "trtype": "TCP", 00:20:23.016 "adrfam": "IPv4", 00:20:23.016 "traddr": "10.0.0.1", 00:20:23.016 "trsvcid": "47688" 00:20:23.016 }, 00:20:23.016 "auth": { 00:20:23.016 "state": "completed", 00:20:23.016 "digest": "sha384", 00:20:23.016 "dhgroup": "ffdhe2048" 00:20:23.016 } 00:20:23.016 } 00:20:23.016 ]' 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.016 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.277 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:23.277 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.847 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.107 13:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.107 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.370 00:20:24.370 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.370 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.370 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.632 { 00:20:24.632 "cntlid": 59, 00:20:24.632 "qid": 0, 00:20:24.632 "state": "enabled", 00:20:24.632 "thread": "nvmf_tgt_poll_group_000", 00:20:24.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.632 "listen_address": { 00:20:24.632 "trtype": "TCP", 00:20:24.632 "adrfam": "IPv4", 00:20:24.632 "traddr": "10.0.0.2", 00:20:24.632 "trsvcid": "4420" 00:20:24.632 }, 00:20:24.632 "peer_address": { 00:20:24.632 "trtype": "TCP", 00:20:24.632 "adrfam": "IPv4", 00:20:24.632 "traddr": "10.0.0.1", 00:20:24.632 "trsvcid": "47700" 00:20:24.632 }, 00:20:24.632 "auth": { 00:20:24.632 "state": "completed", 00:20:24.632 "digest": "sha384", 00:20:24.632 "dhgroup": "ffdhe2048" 00:20:24.632 } 00:20:24.632 } 00:20:24.632 ]' 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.632 13:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.632 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.633 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.894 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:24.894 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.468 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.731 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.993 00:20:25.993 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.993 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.993 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.255 13:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.255 { 00:20:26.255 "cntlid": 61, 00:20:26.255 "qid": 0, 00:20:26.255 "state": "enabled", 00:20:26.255 "thread": "nvmf_tgt_poll_group_000", 00:20:26.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.255 "listen_address": { 00:20:26.255 "trtype": "TCP", 00:20:26.255 "adrfam": "IPv4", 00:20:26.255 "traddr": "10.0.0.2", 00:20:26.255 "trsvcid": "4420" 00:20:26.255 }, 00:20:26.255 "peer_address": { 00:20:26.255 "trtype": "TCP", 00:20:26.255 "adrfam": "IPv4", 00:20:26.255 "traddr": "10.0.0.1", 00:20:26.255 "trsvcid": "47734" 00:20:26.255 }, 00:20:26.255 "auth": { 00:20:26.255 "state": "completed", 00:20:26.255 "digest": "sha384", 00:20:26.255 "dhgroup": "ffdhe2048" 00:20:26.255 } 00:20:26.255 } 00:20:26.255 ]' 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.255 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.515 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:26.515 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.087 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.347 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.347 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.347 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.348 13:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.700 00:20:27.700 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.700 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.700 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.700 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.700 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.701 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.701 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.701 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.701 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.701 { 00:20:27.701 "cntlid": 63, 00:20:27.701 "qid": 0, 00:20:27.701 "state": "enabled", 00:20:27.701 "thread": "nvmf_tgt_poll_group_000", 00:20:27.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:27.701 "listen_address": { 00:20:27.701 "trtype": "TCP", 00:20:27.701 "adrfam": 
"IPv4", 00:20:27.701 "traddr": "10.0.0.2", 00:20:27.701 "trsvcid": "4420" 00:20:27.701 }, 00:20:27.701 "peer_address": { 00:20:27.701 "trtype": "TCP", 00:20:27.701 "adrfam": "IPv4", 00:20:27.701 "traddr": "10.0.0.1", 00:20:27.701 "trsvcid": "47766" 00:20:27.701 }, 00:20:27.701 "auth": { 00:20:27.701 "state": "completed", 00:20:27.701 "digest": "sha384", 00:20:27.701 "dhgroup": "ffdhe2048" 00:20:27.701 } 00:20:27.701 } 00:20:27.701 ]' 00:20:27.701 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:28.006 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.951 
13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.951 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.213 00:20:29.213 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.213 13:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.213 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.475 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.476 { 00:20:29.476 "cntlid": 65, 00:20:29.476 "qid": 0, 00:20:29.476 "state": "enabled", 00:20:29.476 "thread": "nvmf_tgt_poll_group_000", 00:20:29.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.476 "listen_address": { 00:20:29.476 "trtype": "TCP", 00:20:29.476 "adrfam": "IPv4", 00:20:29.476 "traddr": "10.0.0.2", 00:20:29.476 "trsvcid": "4420" 00:20:29.476 }, 00:20:29.476 "peer_address": { 00:20:29.476 "trtype": "TCP", 00:20:29.476 "adrfam": "IPv4", 00:20:29.476 "traddr": "10.0.0.1", 00:20:29.476 "trsvcid": "47786" 00:20:29.476 }, 00:20:29.476 "auth": { 00:20:29.476 "state": "completed", 00:20:29.476 "digest": "sha384", 00:20:29.476 "dhgroup": "ffdhe3072" 00:20:29.476 } 00:20:29.476 } 00:20:29.476 ]' 00:20:29.476 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.476 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:29.476 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.476 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.476 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.476 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.476 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.476 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.737 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:29.737 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.309 13:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.570 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.832 00:20:30.832 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.832 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.832 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.094 13:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.094 { 00:20:31.094 "cntlid": 67, 00:20:31.094 "qid": 0, 00:20:31.094 "state": "enabled", 00:20:31.094 "thread": "nvmf_tgt_poll_group_000", 00:20:31.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.094 "listen_address": { 00:20:31.094 "trtype": "TCP", 00:20:31.094 "adrfam": "IPv4", 00:20:31.094 "traddr": "10.0.0.2", 00:20:31.094 "trsvcid": "4420" 00:20:31.094 }, 00:20:31.094 "peer_address": { 00:20:31.094 "trtype": "TCP", 00:20:31.094 "adrfam": "IPv4", 00:20:31.094 "traddr": "10.0.0.1", 00:20:31.094 "trsvcid": "47810" 00:20:31.094 }, 00:20:31.094 "auth": { 00:20:31.094 "state": "completed", 00:20:31.094 "digest": "sha384", 00:20:31.094 "dhgroup": "ffdhe3072" 00:20:31.094 } 00:20:31.094 } 00:20:31.094 ]' 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.094 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.355 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:31.355 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.927 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.189 13:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.449 00:20:32.449 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.449 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.449 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.708 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.708 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.708 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.709 { 00:20:32.709 "cntlid": 69, 00:20:32.709 "qid": 0, 00:20:32.709 "state": "enabled", 00:20:32.709 "thread": "nvmf_tgt_poll_group_000", 00:20:32.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.709 
"listen_address": { 00:20:32.709 "trtype": "TCP", 00:20:32.709 "adrfam": "IPv4", 00:20:32.709 "traddr": "10.0.0.2", 00:20:32.709 "trsvcid": "4420" 00:20:32.709 }, 00:20:32.709 "peer_address": { 00:20:32.709 "trtype": "TCP", 00:20:32.709 "adrfam": "IPv4", 00:20:32.709 "traddr": "10.0.0.1", 00:20:32.709 "trsvcid": "53448" 00:20:32.709 }, 00:20:32.709 "auth": { 00:20:32.709 "state": "completed", 00:20:32.709 "digest": "sha384", 00:20:32.709 "dhgroup": "ffdhe3072" 00:20:32.709 } 00:20:32.709 } 00:20:32.709 ]' 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.709 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.968 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:32.968 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:33.536 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.537 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.796 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:33.796 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.796 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:33.796 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.797 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.057 00:20:34.057 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.057 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.057 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.318 { 00:20:34.318 "cntlid": 71, 00:20:34.318 "qid": 0, 00:20:34.318 "state": "enabled", 00:20:34.318 "thread": "nvmf_tgt_poll_group_000", 00:20:34.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.318 "listen_address": { 00:20:34.318 "trtype": "TCP", 00:20:34.318 "adrfam": "IPv4", 00:20:34.318 "traddr": "10.0.0.2", 00:20:34.318 "trsvcid": "4420" 00:20:34.318 }, 00:20:34.318 "peer_address": { 00:20:34.318 "trtype": "TCP", 00:20:34.318 "adrfam": "IPv4", 00:20:34.318 "traddr": "10.0.0.1", 00:20:34.318 "trsvcid": "53478" 00:20:34.318 }, 00:20:34.318 "auth": { 00:20:34.318 "state": "completed", 00:20:34.318 "digest": "sha384", 00:20:34.318 "dhgroup": "ffdhe3072" 00:20:34.318 } 00:20:34.318 } 00:20:34.318 ]' 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.318 13:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.318 13:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.579 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:34.579 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:35.149 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.411 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.411 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.671 00:20:35.671 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.671 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.671 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.930 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.930 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.930 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.930 13:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.930 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.930 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.930 { 00:20:35.930 "cntlid": 73, 00:20:35.930 "qid": 0, 00:20:35.930 "state": "enabled", 00:20:35.930 "thread": "nvmf_tgt_poll_group_000", 00:20:35.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:35.931 "listen_address": { 00:20:35.931 "trtype": "TCP", 00:20:35.931 "adrfam": "IPv4", 00:20:35.931 "traddr": "10.0.0.2", 00:20:35.931 "trsvcid": "4420" 00:20:35.931 }, 00:20:35.931 "peer_address": { 00:20:35.931 "trtype": "TCP", 00:20:35.931 "adrfam": "IPv4", 00:20:35.931 "traddr": "10.0.0.1", 00:20:35.931 "trsvcid": "53500" 00:20:35.931 }, 00:20:35.931 "auth": { 00:20:35.931 "state": "completed", 00:20:35.931 "digest": "sha384", 00:20:35.931 "dhgroup": "ffdhe4096" 00:20:35.931 } 00:20:35.931 } 00:20:35.931 ]' 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.931 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.191 13:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.191 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:36.191 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:36.762 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.023 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.283 00:20:37.283 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.283 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.283 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.543 { 00:20:37.543 "cntlid": 75, 00:20:37.543 "qid": 0, 00:20:37.543 "state": "enabled", 00:20:37.543 "thread": "nvmf_tgt_poll_group_000", 00:20:37.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.543 
"listen_address": { 00:20:37.543 "trtype": "TCP", 00:20:37.543 "adrfam": "IPv4", 00:20:37.543 "traddr": "10.0.0.2", 00:20:37.543 "trsvcid": "4420" 00:20:37.543 }, 00:20:37.543 "peer_address": { 00:20:37.543 "trtype": "TCP", 00:20:37.543 "adrfam": "IPv4", 00:20:37.543 "traddr": "10.0.0.1", 00:20:37.543 "trsvcid": "53536" 00:20:37.543 }, 00:20:37.543 "auth": { 00:20:37.543 "state": "completed", 00:20:37.543 "digest": "sha384", 00:20:37.543 "dhgroup": "ffdhe4096" 00:20:37.543 } 00:20:37.543 } 00:20:37.543 ]' 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.543 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.804 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.804 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.804 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.804 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:37.804 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.746 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.006 00:20:39.006 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:39.007 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.007 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.267 { 00:20:39.267 "cntlid": 77, 00:20:39.267 "qid": 0, 00:20:39.267 "state": "enabled", 00:20:39.267 "thread": "nvmf_tgt_poll_group_000", 00:20:39.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.267 "listen_address": { 00:20:39.267 "trtype": "TCP", 00:20:39.267 "adrfam": "IPv4", 00:20:39.267 "traddr": "10.0.0.2", 00:20:39.267 "trsvcid": "4420" 00:20:39.267 }, 00:20:39.267 "peer_address": { 00:20:39.267 "trtype": "TCP", 00:20:39.267 "adrfam": "IPv4", 00:20:39.267 "traddr": "10.0.0.1", 00:20:39.267 "trsvcid": "53554" 00:20:39.267 }, 00:20:39.267 "auth": { 00:20:39.267 "state": "completed", 00:20:39.267 "digest": "sha384", 00:20:39.267 "dhgroup": "ffdhe4096" 00:20:39.267 } 00:20:39.267 } 00:20:39.267 ]' 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.267 13:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.267 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.529 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:39.529 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.100 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:40.361 13:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.361 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.622 00:20:40.622 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.622 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.622 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.884 13:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.884 { 00:20:40.884 "cntlid": 79, 00:20:40.884 "qid": 0, 00:20:40.884 "state": "enabled", 00:20:40.884 "thread": "nvmf_tgt_poll_group_000", 00:20:40.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:40.884 "listen_address": { 00:20:40.884 "trtype": "TCP", 00:20:40.884 "adrfam": "IPv4", 00:20:40.884 "traddr": "10.0.0.2", 00:20:40.884 "trsvcid": "4420" 00:20:40.884 }, 00:20:40.884 "peer_address": { 00:20:40.884 "trtype": "TCP", 00:20:40.884 "adrfam": "IPv4", 00:20:40.884 "traddr": "10.0.0.1", 00:20:40.884 "trsvcid": "53586" 00:20:40.884 }, 00:20:40.884 "auth": { 00:20:40.884 "state": "completed", 00:20:40.884 "digest": "sha384", 00:20:40.884 "dhgroup": "ffdhe4096" 00:20:40.884 } 00:20:40.884 } 00:20:40.884 ]' 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.884 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.884 13:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.145 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:41.145 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:41.715 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:41.716 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.976 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.977 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.977 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.238 00:20:42.238 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.238 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.238 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.498 { 00:20:42.498 "cntlid": 81, 00:20:42.498 "qid": 0, 00:20:42.498 "state": "enabled", 00:20:42.498 "thread": "nvmf_tgt_poll_group_000", 00:20:42.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:42.498 "listen_address": { 
00:20:42.498 "trtype": "TCP", 00:20:42.498 "adrfam": "IPv4", 00:20:42.498 "traddr": "10.0.0.2", 00:20:42.498 "trsvcid": "4420" 00:20:42.498 }, 00:20:42.498 "peer_address": { 00:20:42.498 "trtype": "TCP", 00:20:42.498 "adrfam": "IPv4", 00:20:42.498 "traddr": "10.0.0.1", 00:20:42.498 "trsvcid": "35598" 00:20:42.498 }, 00:20:42.498 "auth": { 00:20:42.498 "state": "completed", 00:20:42.498 "digest": "sha384", 00:20:42.498 "dhgroup": "ffdhe6144" 00:20:42.498 } 00:20:42.498 } 00:20:42.498 ]' 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.498 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.758 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.758 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.758 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.758 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:42.758 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.698 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.958 00:20:43.958 13:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.958 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.958 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.219 { 00:20:44.219 "cntlid": 83, 00:20:44.219 "qid": 0, 00:20:44.219 "state": "enabled", 00:20:44.219 "thread": "nvmf_tgt_poll_group_000", 00:20:44.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:44.219 "listen_address": { 00:20:44.219 "trtype": "TCP", 00:20:44.219 "adrfam": "IPv4", 00:20:44.219 "traddr": "10.0.0.2", 00:20:44.219 "trsvcid": "4420" 00:20:44.219 }, 00:20:44.219 "peer_address": { 00:20:44.219 "trtype": "TCP", 00:20:44.219 "adrfam": "IPv4", 00:20:44.219 "traddr": "10.0.0.1", 00:20:44.219 "trsvcid": "35632" 00:20:44.219 }, 00:20:44.219 "auth": { 00:20:44.219 "state": "completed", 00:20:44.219 "digest": "sha384", 00:20:44.219 "dhgroup": "ffdhe6144" 00:20:44.219 } 00:20:44.219 } 00:20:44.219 ]' 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.219 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.479 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.479 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.479 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.479 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:44.479 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.420 13:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.420 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.680 00:20:45.680 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.680 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.680 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.940 { 00:20:45.940 "cntlid": 85, 00:20:45.940 "qid": 0, 00:20:45.940 "state": "enabled", 00:20:45.940 "thread": "nvmf_tgt_poll_group_000", 00:20:45.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.940 "listen_address": { 00:20:45.940 "trtype": "TCP", 00:20:45.940 "adrfam": "IPv4", 00:20:45.940 "traddr": "10.0.0.2", 00:20:45.940 "trsvcid": "4420" 00:20:45.940 }, 00:20:45.940 "peer_address": { 00:20:45.940 "trtype": "TCP", 00:20:45.940 "adrfam": "IPv4", 00:20:45.940 "traddr": "10.0.0.1", 00:20:45.940 "trsvcid": "35652" 00:20:45.940 }, 00:20:45.940 "auth": { 00:20:45.940 "state": "completed", 00:20:45.940 "digest": "sha384", 00:20:45.940 "dhgroup": "ffdhe6144" 00:20:45.940 } 00:20:45.940 } 00:20:45.940 ]' 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.940 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.201 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:46.201 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.201 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.201 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:46.201 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:47.141 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.142 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.402 00:20:47.402 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.402 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.402 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.662 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.662 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.662 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.662 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.663 { 00:20:47.663 "cntlid": 87, 00:20:47.663 "qid": 0, 00:20:47.663 "state": "enabled", 00:20:47.663 "thread": "nvmf_tgt_poll_group_000", 00:20:47.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.663 "listen_address": { 00:20:47.663 "trtype": 
"TCP", 00:20:47.663 "adrfam": "IPv4", 00:20:47.663 "traddr": "10.0.0.2", 00:20:47.663 "trsvcid": "4420" 00:20:47.663 }, 00:20:47.663 "peer_address": { 00:20:47.663 "trtype": "TCP", 00:20:47.663 "adrfam": "IPv4", 00:20:47.663 "traddr": "10.0.0.1", 00:20:47.663 "trsvcid": "35686" 00:20:47.663 }, 00:20:47.663 "auth": { 00:20:47.663 "state": "completed", 00:20:47.663 "digest": "sha384", 00:20:47.663 "dhgroup": "ffdhe6144" 00:20:47.663 } 00:20:47.663 } 00:20:47.663 ]' 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.663 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.923 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:47.923 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.493 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.754 13:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.754 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.325 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.325 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.325 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.585 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.585 { 00:20:49.585 "cntlid": 89, 00:20:49.585 "qid": 0, 00:20:49.585 "state": "enabled", 00:20:49.585 "thread": "nvmf_tgt_poll_group_000", 00:20:49.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:49.585 "listen_address": { 00:20:49.585 "trtype": "TCP", 00:20:49.585 "adrfam": "IPv4", 00:20:49.585 "traddr": "10.0.0.2", 00:20:49.585 "trsvcid": "4420" 00:20:49.585 }, 00:20:49.585 "peer_address": { 00:20:49.585 "trtype": "TCP", 00:20:49.585 "adrfam": "IPv4", 00:20:49.585 "traddr": "10.0.0.1", 00:20:49.585 "trsvcid": "35716" 00:20:49.585 }, 00:20:49.585 "auth": { 00:20:49.585 "state": "completed", 00:20:49.585 "digest": "sha384", 00:20:49.585 "dhgroup": "ffdhe8192" 00:20:49.585 } 00:20:49.585 } 00:20:49.585 ]' 00:20:49.585 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.585 13:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.585 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.586 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.586 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.586 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.586 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.586 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.846 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:49.846 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.416 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.676 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.246 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.246 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.246 { 00:20:51.246 "cntlid": 91, 00:20:51.246 "qid": 0, 00:20:51.246 "state": "enabled", 00:20:51.246 "thread": "nvmf_tgt_poll_group_000", 00:20:51.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.246 "listen_address": { 00:20:51.246 "trtype": "TCP", 00:20:51.246 "adrfam": "IPv4", 00:20:51.246 "traddr": "10.0.0.2", 00:20:51.246 "trsvcid": "4420" 00:20:51.246 }, 00:20:51.247 "peer_address": { 00:20:51.247 "trtype": "TCP", 00:20:51.247 "adrfam": "IPv4", 00:20:51.247 "traddr": "10.0.0.1", 00:20:51.247 "trsvcid": "35746" 00:20:51.247 }, 00:20:51.247 "auth": { 00:20:51.247 "state": "completed", 00:20:51.247 "digest": "sha384", 00:20:51.247 "dhgroup": "ffdhe8192" 00:20:51.247 } 00:20:51.247 } 00:20:51.247 ]' 00:20:51.247 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.247 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.247 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.508 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.508 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.508 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:51.508 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.508 13:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.508 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:51.508 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.451 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.451 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.021 00:20:53.021 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.021 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.021 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.283 { 00:20:53.283 "cntlid": 93, 00:20:53.283 "qid": 0, 00:20:53.283 "state": "enabled", 00:20:53.283 "thread": "nvmf_tgt_poll_group_000", 00:20:53.283 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:53.283 "listen_address": { 00:20:53.283 "trtype": "TCP", 00:20:53.283 "adrfam": "IPv4", 00:20:53.283 "traddr": "10.0.0.2", 00:20:53.283 "trsvcid": "4420" 00:20:53.283 }, 00:20:53.283 "peer_address": { 00:20:53.283 "trtype": "TCP", 00:20:53.283 "adrfam": "IPv4", 00:20:53.283 "traddr": "10.0.0.1", 00:20:53.283 "trsvcid": "34772" 00:20:53.283 }, 00:20:53.283 "auth": { 00:20:53.283 "state": "completed", 00:20:53.283 "digest": "sha384", 00:20:53.283 "dhgroup": "ffdhe8192" 00:20:53.283 } 00:20:53.283 } 00:20:53.283 ]' 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.283 13:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.544 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:53.544 13:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.116 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.377 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.947 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.947 { 00:20:54.947 "cntlid": 95, 00:20:54.947 "qid": 0, 00:20:54.947 "state": "enabled", 00:20:54.947 "thread": "nvmf_tgt_poll_group_000", 00:20:54.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.947 "listen_address": { 00:20:54.947 "trtype": "TCP", 00:20:54.947 "adrfam": "IPv4", 00:20:54.947 "traddr": "10.0.0.2", 00:20:54.947 "trsvcid": "4420" 00:20:54.947 }, 00:20:54.947 "peer_address": { 00:20:54.947 "trtype": "TCP", 00:20:54.947 "adrfam": "IPv4", 00:20:54.947 "traddr": "10.0.0.1", 00:20:54.947 "trsvcid": "34804" 00:20:54.947 }, 00:20:54.947 "auth": { 00:20:54.947 "state": "completed", 00:20:54.947 "digest": "sha384", 00:20:54.947 "dhgroup": "ffdhe8192" 00:20:54.947 } 00:20:54.947 } 00:20:54.947 ]' 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.947 13:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.947 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.208 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.208 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.208 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.208 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.208 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.468 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:55.468 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.039 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.299 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.299 00:20:56.560 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.560 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.560 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.560 13:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.560 { 00:20:56.560 "cntlid": 97, 00:20:56.560 "qid": 0, 00:20:56.560 "state": "enabled", 00:20:56.560 "thread": "nvmf_tgt_poll_group_000", 00:20:56.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.560 "listen_address": { 00:20:56.560 "trtype": "TCP", 00:20:56.560 "adrfam": "IPv4", 00:20:56.560 "traddr": "10.0.0.2", 00:20:56.560 "trsvcid": "4420" 00:20:56.560 }, 00:20:56.560 "peer_address": { 00:20:56.560 "trtype": "TCP", 00:20:56.560 "adrfam": "IPv4", 00:20:56.560 "traddr": "10.0.0.1", 00:20:56.560 "trsvcid": "34828" 00:20:56.560 }, 00:20:56.560 "auth": { 00:20:56.560 "state": "completed", 00:20:56.560 "digest": "sha512", 00:20:56.560 "dhgroup": "null" 00:20:56.560 } 00:20:56.560 } 00:20:56.560 ]' 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.560 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:56.819 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.760 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.761 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.761 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.020 00:20:58.020 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.020 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.020 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.279 { 00:20:58.279 "cntlid": 99, 
00:20:58.279 "qid": 0, 00:20:58.279 "state": "enabled", 00:20:58.279 "thread": "nvmf_tgt_poll_group_000", 00:20:58.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:58.279 "listen_address": { 00:20:58.279 "trtype": "TCP", 00:20:58.279 "adrfam": "IPv4", 00:20:58.279 "traddr": "10.0.0.2", 00:20:58.279 "trsvcid": "4420" 00:20:58.279 }, 00:20:58.279 "peer_address": { 00:20:58.279 "trtype": "TCP", 00:20:58.279 "adrfam": "IPv4", 00:20:58.279 "traddr": "10.0.0.1", 00:20:58.279 "trsvcid": "34856" 00:20:58.279 }, 00:20:58.279 "auth": { 00:20:58.279 "state": "completed", 00:20:58.279 "digest": "sha512", 00:20:58.279 "dhgroup": "null" 00:20:58.279 } 00:20:58.279 } 00:20:58.279 ]' 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.279 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.553 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret 
DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:58.553 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.129 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.390 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.651 00:20:59.651 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.651 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.651 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.911 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.911 { 00:20:59.911 "cntlid": 101, 00:20:59.911 "qid": 0, 00:20:59.911 "state": "enabled", 00:20:59.911 "thread": "nvmf_tgt_poll_group_000", 00:20:59.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.911 "listen_address": { 00:20:59.911 "trtype": "TCP", 00:20:59.911 "adrfam": "IPv4", 00:20:59.912 "traddr": "10.0.0.2", 00:20:59.912 "trsvcid": "4420" 00:20:59.912 }, 00:20:59.912 "peer_address": { 00:20:59.912 "trtype": "TCP", 00:20:59.912 "adrfam": "IPv4", 00:20:59.912 "traddr": "10.0.0.1", 00:20:59.912 "trsvcid": "34882" 00:20:59.912 }, 00:20:59.912 "auth": { 00:20:59.912 "state": "completed", 00:20:59.912 "digest": "sha512", 00:20:59.912 "dhgroup": "null" 00:20:59.912 } 00:20:59.912 } 
00:20:59.912 ]' 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.912 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.173 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:00.173 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.748 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.748 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.009 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.269 00:21:01.269 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.269 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.269 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.530 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.530 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:01.530 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.530 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.530 { 00:21:01.530 "cntlid": 103, 00:21:01.530 "qid": 0, 00:21:01.530 "state": "enabled", 00:21:01.530 "thread": "nvmf_tgt_poll_group_000", 00:21:01.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.530 "listen_address": { 00:21:01.530 "trtype": "TCP", 00:21:01.530 "adrfam": "IPv4", 00:21:01.530 "traddr": "10.0.0.2", 00:21:01.530 "trsvcid": "4420" 00:21:01.530 }, 00:21:01.530 "peer_address": { 00:21:01.530 "trtype": "TCP", 00:21:01.530 "adrfam": "IPv4", 00:21:01.530 "traddr": "10.0.0.1", 00:21:01.530 "trsvcid": "34918" 00:21:01.530 }, 00:21:01.530 "auth": { 00:21:01.530 "state": "completed", 00:21:01.530 "digest": "sha512", 00:21:01.530 "dhgroup": "null" 00:21:01.530 } 00:21:01.530 } 00:21:01.530 ]' 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.530 13:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.530 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.791 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:01.791 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.363 13:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.363 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.625 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.626 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.887 00:21:02.887 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.887 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.887 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.148 { 00:21:03.148 "cntlid": 105, 00:21:03.148 "qid": 0, 00:21:03.148 "state": "enabled", 00:21:03.148 "thread": "nvmf_tgt_poll_group_000", 00:21:03.148 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:03.148 "listen_address": { 00:21:03.148 "trtype": "TCP", 00:21:03.148 "adrfam": "IPv4", 00:21:03.148 "traddr": "10.0.0.2", 00:21:03.148 "trsvcid": "4420" 00:21:03.148 }, 00:21:03.148 "peer_address": { 00:21:03.148 "trtype": "TCP", 00:21:03.148 "adrfam": "IPv4", 00:21:03.148 "traddr": "10.0.0.1", 00:21:03.148 "trsvcid": "55142" 00:21:03.148 }, 00:21:03.148 "auth": { 00:21:03.148 "state": "completed", 00:21:03.148 "digest": "sha512", 00:21:03.148 "dhgroup": "ffdhe2048" 00:21:03.148 } 00:21:03.148 } 00:21:03.148 ]' 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.148 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.410 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:03.410 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.983 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.245 13:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.245 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.506 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.506 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.767 { 00:21:04.767 "cntlid": 107, 00:21:04.767 "qid": 0, 00:21:04.767 "state": "enabled", 00:21:04.767 "thread": "nvmf_tgt_poll_group_000", 00:21:04.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.767 "listen_address": { 00:21:04.767 "trtype": "TCP", 00:21:04.767 "adrfam": "IPv4", 00:21:04.767 "traddr": "10.0.0.2", 00:21:04.767 "trsvcid": "4420" 00:21:04.767 }, 00:21:04.767 "peer_address": { 00:21:04.767 "trtype": "TCP", 00:21:04.767 "adrfam": "IPv4", 00:21:04.767 "traddr": "10.0.0.1", 00:21:04.767 "trsvcid": "55152" 00:21:04.767 }, 00:21:04.767 "auth": { 00:21:04.767 "state": 
"completed", 00:21:04.767 "digest": "sha512", 00:21:04.767 "dhgroup": "ffdhe2048" 00:21:04.767 } 00:21:04.767 } 00:21:04.767 ]' 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.767 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.028 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:05.028 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:05.598 13:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.598 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.859 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.119 00:21:06.119 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.119 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.119 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.119 
13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.119 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.120 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.120 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.120 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.120 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.120 { 00:21:06.120 "cntlid": 109, 00:21:06.120 "qid": 0, 00:21:06.120 "state": "enabled", 00:21:06.120 "thread": "nvmf_tgt_poll_group_000", 00:21:06.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:06.120 "listen_address": { 00:21:06.120 "trtype": "TCP", 00:21:06.120 "adrfam": "IPv4", 00:21:06.120 "traddr": "10.0.0.2", 00:21:06.120 "trsvcid": "4420" 00:21:06.120 }, 00:21:06.120 "peer_address": { 00:21:06.120 "trtype": "TCP", 00:21:06.120 "adrfam": "IPv4", 00:21:06.120 "traddr": "10.0.0.1", 00:21:06.120 "trsvcid": "55186" 00:21:06.120 }, 00:21:06.120 "auth": { 00:21:06.120 "state": "completed", 00:21:06.120 "digest": "sha512", 00:21:06.120 "dhgroup": "ffdhe2048" 00:21:06.120 } 00:21:06.120 } 00:21:06.120 ]' 00:21:06.120 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.433 13:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.433 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.761 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:06.761 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:07.346 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.346 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.346 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.346 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.346 
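After each attach, the test captures the JSON from `nvmf_subsystem_get_qpairs` and asserts the negotiated auth fields with jq, as in the `auth.sh@75`–`@77` checks traced above. The sketch below inlines a trimmed copy of one qpair entry from the log so the same checks run standalone; it assumes `jq` is installed.

```shell
#!/usr/bin/env bash
# Standalone sketch of the qpair auth verification step. The JSON is a
# trimmed copy of an entry from the trace, not live rpc_cmd output.
qpairs='[{"cntlid":109,"state":"enabled",
          "auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe2048"}}]'

digest=$(jq  -r '.[0].auth.digest'  <<<"$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
state=$(jq   -r '.[0].auth.state'   <<<"$qpairs")

# Mirror the [[ ... == ... ]] assertions from target/auth.sh.
[[ $digest == sha512 && $dhgroup == ffdhe2048 && $state == completed ]] \
    && echo "auth verified: $digest/$dhgroup"
```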
13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.347 13:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.347 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.607 00:21:07.607 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.607 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.607 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.867 { 00:21:07.867 "cntlid": 111, 
00:21:07.867 "qid": 0, 00:21:07.867 "state": "enabled", 00:21:07.867 "thread": "nvmf_tgt_poll_group_000", 00:21:07.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:07.867 "listen_address": { 00:21:07.867 "trtype": "TCP", 00:21:07.867 "adrfam": "IPv4", 00:21:07.867 "traddr": "10.0.0.2", 00:21:07.867 "trsvcid": "4420" 00:21:07.867 }, 00:21:07.867 "peer_address": { 00:21:07.867 "trtype": "TCP", 00:21:07.867 "adrfam": "IPv4", 00:21:07.867 "traddr": "10.0.0.1", 00:21:07.867 "trsvcid": "55214" 00:21:07.867 }, 00:21:07.867 "auth": { 00:21:07.867 "state": "completed", 00:21:07.867 "digest": "sha512", 00:21:07.867 "dhgroup": "ffdhe2048" 00:21:07.867 } 00:21:07.867 } 00:21:07.867 ]' 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.867 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.127 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:08.127 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.697 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.957 13:05:11 
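Each pass also exercises the kernel initiator: `nvme connect` with in-band DHHC-1 secrets, then `nvme disconnect` and host deregistration, as in the `auth.sh@80`–`@83` steps above. A dry-run sketch follows; `run` echoes instead of executing, `<host-secret>`/`<ctrl-secret>` stand in for the base64 `DHHC-1:xx:...:` blobs printed in the trace, and the relative `scripts/rpc.py` path is an assumption.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the kernel-initiator leg of one pass.
run() { echo "$@"; }

HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Connect over TCP, authenticating with the host and controller secrets.
run nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid $HOSTID -l 0 \
    --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
# Tear down: disconnect the controller and deregister the host NQN.
run nvme disconnect -n $SUBNQN
run scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
```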
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.957 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.217 00:21:09.217 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.217 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.217 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.477 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.477 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.477 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.477 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.477 { 00:21:09.477 "cntlid": 113, 00:21:09.477 "qid": 0, 00:21:09.477 "state": "enabled", 00:21:09.477 "thread": "nvmf_tgt_poll_group_000", 00:21:09.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:09.477 "listen_address": { 00:21:09.477 "trtype": "TCP", 00:21:09.477 "adrfam": "IPv4", 00:21:09.477 "traddr": "10.0.0.2", 00:21:09.477 "trsvcid": "4420" 00:21:09.477 }, 00:21:09.477 "peer_address": { 00:21:09.477 "trtype": "TCP", 00:21:09.477 "adrfam": "IPv4", 00:21:09.477 "traddr": "10.0.0.1", 00:21:09.477 "trsvcid": "55256" 00:21:09.477 }, 00:21:09.477 "auth": { 00:21:09.477 "state": 
"completed", 00:21:09.477 "digest": "sha512", 00:21:09.477 "dhgroup": "ffdhe3072" 00:21:09.477 } 00:21:09.477 } 00:21:09.477 ]' 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.477 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.738 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:09.738 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:10.308 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.568 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.828 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.089 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.089 { 00:21:11.089 "cntlid": 115, 00:21:11.089 "qid": 0, 00:21:11.089 "state": "enabled", 00:21:11.089 "thread": "nvmf_tgt_poll_group_000", 00:21:11.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.089 "listen_address": { 00:21:11.089 "trtype": "TCP", 00:21:11.089 "adrfam": "IPv4", 00:21:11.089 "traddr": "10.0.0.2", 00:21:11.089 "trsvcid": "4420" 00:21:11.089 }, 00:21:11.089 "peer_address": { 00:21:11.089 "trtype": "TCP", 00:21:11.089 "adrfam": "IPv4", 00:21:11.089 "traddr": "10.0.0.1", 00:21:11.089 "trsvcid": "55286" 00:21:11.089 }, 00:21:11.089 "auth": { 00:21:11.089 "state": "completed", 00:21:11.089 "digest": "sha512", 00:21:11.089 "dhgroup": "ffdhe3072" 00:21:11.089 } 00:21:11.089 } 00:21:11.089 ]' 00:21:11.089 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.349 13:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.349 13:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.609 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:11.609 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.180 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.441 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.701 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.701 13:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.701 { 00:21:12.701 "cntlid": 117, 00:21:12.701 "qid": 0, 00:21:12.701 "state": "enabled", 00:21:12.701 "thread": "nvmf_tgt_poll_group_000", 00:21:12.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.701 "listen_address": { 00:21:12.701 "trtype": "TCP", 00:21:12.701 "adrfam": "IPv4", 00:21:12.701 "traddr": "10.0.0.2", 00:21:12.701 "trsvcid": "4420" 00:21:12.701 }, 00:21:12.701 "peer_address": { 00:21:12.701 "trtype": "TCP", 00:21:12.701 "adrfam": "IPv4", 00:21:12.701 "traddr": "10.0.0.1", 00:21:12.701 "trsvcid": "52406" 00:21:12.701 }, 00:21:12.701 "auth": { 00:21:12.701 "state": "completed", 00:21:12.701 "digest": "sha512", 00:21:12.701 "dhgroup": "ffdhe3072" 00:21:12.701 } 00:21:12.701 } 00:21:12.701 ]' 00:21:12.701 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.961 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.222 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:13.222 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.793 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.053 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.054 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.313 00:21:14.313 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.314 { 00:21:14.314 "cntlid": 119, 00:21:14.314 "qid": 0, 00:21:14.314 "state": "enabled", 00:21:14.314 "thread": "nvmf_tgt_poll_group_000", 00:21:14.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.314 "listen_address": { 00:21:14.314 "trtype": "TCP", 00:21:14.314 "adrfam": "IPv4", 00:21:14.314 "traddr": "10.0.0.2", 00:21:14.314 "trsvcid": "4420" 00:21:14.314 }, 00:21:14.314 "peer_address": { 00:21:14.314 "trtype": "TCP", 00:21:14.314 "adrfam": "IPv4", 00:21:14.314 "traddr": "10.0.0.1", 
00:21:14.314 "trsvcid": "52436" 00:21:14.314 }, 00:21:14.314 "auth": { 00:21:14.314 "state": "completed", 00:21:14.314 "digest": "sha512", 00:21:14.314 "dhgroup": "ffdhe3072" 00:21:14.314 } 00:21:14.314 } 00:21:14.314 ]' 00:21:14.314 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.574 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.834 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:14.834 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.409 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.669 13:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.669 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.669 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.929 { 00:21:15.929 "cntlid": 121, 00:21:15.929 "qid": 0, 00:21:15.929 "state": "enabled", 00:21:15.929 "thread": "nvmf_tgt_poll_group_000", 00:21:15.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:15.929 "listen_address": { 00:21:15.929 "trtype": "TCP", 00:21:15.929 "adrfam": "IPv4", 00:21:15.929 "traddr": "10.0.0.2", 00:21:15.929 "trsvcid": "4420" 00:21:15.929 }, 00:21:15.929 "peer_address": { 00:21:15.929 "trtype": "TCP", 00:21:15.929 "adrfam": "IPv4", 00:21:15.929 "traddr": "10.0.0.1", 00:21:15.929 "trsvcid": "52464" 00:21:15.929 }, 00:21:15.929 "auth": { 00:21:15.929 "state": "completed", 00:21:15.929 "digest": "sha512", 00:21:15.929 "dhgroup": "ffdhe4096" 00:21:15.929 } 00:21:15.929 } 00:21:15.929 ]' 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.929 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.189 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.189 13:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.189 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.189 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.189 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.189 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.450 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:16.450 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:17.020 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.021 13:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.021 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.281 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.282 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.282 13:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.282 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.282 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.282 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.282 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.543 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.543 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.803 { 00:21:17.803 "cntlid": 123, 00:21:17.803 "qid": 0, 00:21:17.803 "state": "enabled", 00:21:17.803 "thread": "nvmf_tgt_poll_group_000", 00:21:17.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:17.803 "listen_address": { 00:21:17.803 "trtype": "TCP", 00:21:17.803 "adrfam": "IPv4", 00:21:17.803 "traddr": "10.0.0.2", 00:21:17.803 "trsvcid": "4420" 00:21:17.803 }, 00:21:17.803 "peer_address": { 00:21:17.803 "trtype": "TCP", 00:21:17.803 "adrfam": "IPv4", 00:21:17.803 "traddr": "10.0.0.1", 00:21:17.803 "trsvcid": "52484" 00:21:17.803 }, 00:21:17.803 "auth": { 00:21:17.803 "state": "completed", 00:21:17.803 "digest": "sha512", 00:21:17.803 "dhgroup": "ffdhe4096" 00:21:17.803 } 00:21:17.803 } 00:21:17.803 ]' 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.803 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.064 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:18.064 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.635 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.635 13:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.897 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.157 00:21:19.157 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.157 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.157 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.418 { 00:21:19.418 "cntlid": 125, 00:21:19.418 "qid": 0, 00:21:19.418 "state": "enabled", 00:21:19.418 "thread": "nvmf_tgt_poll_group_000", 00:21:19.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:19.418 "listen_address": { 00:21:19.418 "trtype": "TCP", 00:21:19.418 "adrfam": "IPv4", 00:21:19.418 "traddr": "10.0.0.2", 00:21:19.418 
"trsvcid": "4420" 00:21:19.418 }, 00:21:19.418 "peer_address": { 00:21:19.418 "trtype": "TCP", 00:21:19.418 "adrfam": "IPv4", 00:21:19.418 "traddr": "10.0.0.1", 00:21:19.418 "trsvcid": "52502" 00:21:19.418 }, 00:21:19.418 "auth": { 00:21:19.418 "state": "completed", 00:21:19.418 "digest": "sha512", 00:21:19.418 "dhgroup": "ffdhe4096" 00:21:19.418 } 00:21:19.418 } 00:21:19.418 ]' 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.418 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.418 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.418 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.418 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.678 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:19.678 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.249 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.509 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.769 00:21:20.769 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.769 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.769 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.030 { 00:21:21.030 "cntlid": 127, 00:21:21.030 "qid": 0, 00:21:21.030 "state": "enabled", 00:21:21.030 "thread": "nvmf_tgt_poll_group_000", 00:21:21.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.030 "listen_address": { 00:21:21.030 "trtype": "TCP", 00:21:21.030 "adrfam": "IPv4", 00:21:21.030 "traddr": "10.0.0.2", 00:21:21.030 "trsvcid": "4420" 00:21:21.030 }, 00:21:21.030 "peer_address": { 00:21:21.030 "trtype": "TCP", 00:21:21.030 "adrfam": "IPv4", 00:21:21.030 "traddr": "10.0.0.1", 00:21:21.030 "trsvcid": "52526" 00:21:21.030 }, 00:21:21.030 "auth": { 00:21:21.030 "state": "completed", 00:21:21.030 "digest": "sha512", 00:21:21.030 "dhgroup": "ffdhe4096" 00:21:21.030 } 00:21:21.030 } 00:21:21.030 ]' 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.030 13:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.030 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.291 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:21.291 13:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.862 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.123 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.124 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.384 00:21:22.384 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.384 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.384 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.643 13:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.643 { 00:21:22.643 "cntlid": 129, 00:21:22.643 "qid": 0, 00:21:22.643 "state": "enabled", 00:21:22.643 "thread": "nvmf_tgt_poll_group_000", 00:21:22.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.643 "listen_address": { 00:21:22.643 "trtype": "TCP", 00:21:22.643 "adrfam": "IPv4", 00:21:22.643 "traddr": "10.0.0.2", 00:21:22.643 "trsvcid": "4420" 00:21:22.643 }, 00:21:22.643 "peer_address": { 00:21:22.643 "trtype": "TCP", 00:21:22.643 "adrfam": "IPv4", 00:21:22.643 "traddr": "10.0.0.1", 00:21:22.643 "trsvcid": "44152" 00:21:22.643 }, 00:21:22.643 "auth": { 00:21:22.643 "state": "completed", 00:21:22.643 "digest": "sha512", 00:21:22.643 "dhgroup": "ffdhe6144" 00:21:22.643 } 00:21:22.643 } 00:21:22.643 ]' 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.643 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.644 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.644 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.644 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.644 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.903 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:22.903 13:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.475 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.475 13:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.736 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.996 00:21:23.996 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.996 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.996 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.257 { 00:21:24.257 "cntlid": 131, 00:21:24.257 "qid": 0, 00:21:24.257 "state": "enabled", 00:21:24.257 "thread": "nvmf_tgt_poll_group_000", 00:21:24.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:24.257 "listen_address": { 00:21:24.257 "trtype": "TCP", 00:21:24.257 "adrfam": "IPv4", 00:21:24.257 "traddr": "10.0.0.2", 00:21:24.257 
"trsvcid": "4420" 00:21:24.257 }, 00:21:24.257 "peer_address": { 00:21:24.257 "trtype": "TCP", 00:21:24.257 "adrfam": "IPv4", 00:21:24.257 "traddr": "10.0.0.1", 00:21:24.257 "trsvcid": "44180" 00:21:24.257 }, 00:21:24.257 "auth": { 00:21:24.257 "state": "completed", 00:21:24.257 "digest": "sha512", 00:21:24.257 "dhgroup": "ffdhe6144" 00:21:24.257 } 00:21:24.257 } 00:21:24.257 ]' 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.257 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.518 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.518 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.518 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.518 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:24.518 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.461 13:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.461 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.722 00:21:25.722 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.722 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:25.722 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.983 { 00:21:25.983 "cntlid": 133, 00:21:25.983 "qid": 0, 00:21:25.983 "state": "enabled", 00:21:25.983 "thread": "nvmf_tgt_poll_group_000", 00:21:25.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:25.983 "listen_address": { 00:21:25.983 "trtype": "TCP", 00:21:25.983 "adrfam": "IPv4", 00:21:25.983 "traddr": "10.0.0.2", 00:21:25.983 "trsvcid": "4420" 00:21:25.983 }, 00:21:25.983 "peer_address": { 00:21:25.983 "trtype": "TCP", 00:21:25.983 "adrfam": "IPv4", 00:21:25.983 "traddr": "10.0.0.1", 00:21:25.983 "trsvcid": "44202" 00:21:25.983 }, 00:21:25.983 "auth": { 00:21:25.983 "state": "completed", 00:21:25.983 "digest": "sha512", 00:21:25.983 "dhgroup": "ffdhe6144" 00:21:25.983 } 00:21:25.983 } 00:21:25.983 ]' 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.983 13:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.983 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:26.244 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.186 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.448 00:21:27.448 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.448 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.448 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.709 { 00:21:27.709 "cntlid": 135, 00:21:27.709 "qid": 0, 00:21:27.709 "state": "enabled", 00:21:27.709 "thread": "nvmf_tgt_poll_group_000", 00:21:27.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:27.709 "listen_address": { 00:21:27.709 "trtype": "TCP", 00:21:27.709 "adrfam": "IPv4", 00:21:27.709 "traddr": "10.0.0.2", 00:21:27.709 "trsvcid": "4420" 00:21:27.709 }, 00:21:27.709 "peer_address": { 00:21:27.709 "trtype": "TCP", 00:21:27.709 "adrfam": "IPv4", 00:21:27.709 "traddr": "10.0.0.1", 00:21:27.709 "trsvcid": "44226" 00:21:27.709 }, 00:21:27.709 "auth": { 00:21:27.709 "state": "completed", 00:21:27.709 "digest": "sha512", 00:21:27.709 "dhgroup": "ffdhe6144" 00:21:27.709 } 00:21:27.709 } 00:21:27.709 ]' 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.709 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:27.971 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.915 13:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.915 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.487 00:21:29.487 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.487 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.487 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.487 { 00:21:29.487 "cntlid": 137, 00:21:29.487 "qid": 0, 00:21:29.487 "state": "enabled", 00:21:29.487 "thread": "nvmf_tgt_poll_group_000", 00:21:29.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:29.487 "listen_address": { 00:21:29.487 "trtype": "TCP", 00:21:29.487 "adrfam": "IPv4", 00:21:29.487 "traddr": "10.0.0.2", 00:21:29.487 
"trsvcid": "4420" 00:21:29.487 }, 00:21:29.487 "peer_address": { 00:21:29.487 "trtype": "TCP", 00:21:29.487 "adrfam": "IPv4", 00:21:29.487 "traddr": "10.0.0.1", 00:21:29.487 "trsvcid": "44244" 00:21:29.487 }, 00:21:29.487 "auth": { 00:21:29.487 "state": "completed", 00:21:29.487 "digest": "sha512", 00:21:29.487 "dhgroup": "ffdhe8192" 00:21:29.487 } 00:21:29.487 } 00:21:29.487 ]' 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.487 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.748 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.749 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.749 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.749 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.749 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.009 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:30.009 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.580 13:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.580 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.151 00:21:31.151 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.151 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.151 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.413 { 00:21:31.413 "cntlid": 139, 00:21:31.413 "qid": 0, 00:21:31.413 "state": "enabled", 00:21:31.413 "thread": "nvmf_tgt_poll_group_000", 00:21:31.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.413 "listen_address": { 00:21:31.413 "trtype": "TCP", 00:21:31.413 "adrfam": "IPv4", 00:21:31.413 "traddr": "10.0.0.2", 00:21:31.413 "trsvcid": "4420" 00:21:31.413 }, 00:21:31.413 "peer_address": { 00:21:31.413 "trtype": "TCP", 00:21:31.413 "adrfam": "IPv4", 00:21:31.413 "traddr": "10.0.0.1", 00:21:31.413 "trsvcid": "44270" 00:21:31.413 }, 00:21:31.413 "auth": { 00:21:31.413 "state": "completed", 00:21:31.413 "digest": "sha512", 00:21:31.413 "dhgroup": "ffdhe8192" 00:21:31.413 } 00:21:31.413 } 00:21:31.413 ]' 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.413 13:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.413 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.413 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.413 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.413 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.413 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.413 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.675 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:31.675 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: --dhchap-ctrl-secret DHHC-1:02:NTJlNGJiNDBlM2YxMTc3MDJhN2E2MmI4MzJmYmQ4MmRmYmUxMDUzZmZmY2M0ZDc3fFs1Kw==: 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.246 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.513 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.085 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.085 13:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.085 { 00:21:33.085 "cntlid": 141, 00:21:33.085 "qid": 0, 00:21:33.085 "state": "enabled", 00:21:33.085 "thread": "nvmf_tgt_poll_group_000", 00:21:33.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.085 "listen_address": { 00:21:33.085 "trtype": "TCP", 00:21:33.085 "adrfam": "IPv4", 00:21:33.085 "traddr": "10.0.0.2", 00:21:33.085 "trsvcid": "4420" 00:21:33.085 }, 00:21:33.085 "peer_address": { 00:21:33.085 "trtype": "TCP", 00:21:33.085 "adrfam": "IPv4", 00:21:33.085 "traddr": "10.0.0.1", 00:21:33.085 "trsvcid": "49072" 00:21:33.085 }, 00:21:33.085 "auth": { 00:21:33.085 "state": "completed", 00:21:33.085 "digest": "sha512", 00:21:33.085 "dhgroup": "ffdhe8192" 00:21:33.085 } 00:21:33.085 } 00:21:33.085 ]' 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.085 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.346 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.346 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.346 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.346 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.346 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.607 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:33.607 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:01:MmEwZGFjNjllMzE5MDE1MTIzMTVmZDkzNjZlMmViNDQqoXQ7: 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.179 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.440 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.441 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.700 00:21:34.700 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.700 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.700 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.961 { 00:21:34.961 "cntlid": 143, 00:21:34.961 "qid": 0, 00:21:34.961 "state": "enabled", 00:21:34.961 "thread": "nvmf_tgt_poll_group_000", 00:21:34.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:34.961 "listen_address": { 00:21:34.961 "trtype": "TCP", 00:21:34.961 "adrfam": 
"IPv4", 00:21:34.961 "traddr": "10.0.0.2", 00:21:34.961 "trsvcid": "4420" 00:21:34.961 }, 00:21:34.961 "peer_address": { 00:21:34.961 "trtype": "TCP", 00:21:34.961 "adrfam": "IPv4", 00:21:34.961 "traddr": "10.0.0.1", 00:21:34.961 "trsvcid": "49104" 00:21:34.961 }, 00:21:34.961 "auth": { 00:21:34.961 "state": "completed", 00:21:34.961 "digest": "sha512", 00:21:34.961 "dhgroup": "ffdhe8192" 00:21:34.961 } 00:21:34.961 } 00:21:34.961 ]' 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.961 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:35.222 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:36.164 13:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.164 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.736 00:21:36.736 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.736 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.736 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.996 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.996 { 00:21:36.996 "cntlid": 145, 00:21:36.996 "qid": 0, 00:21:36.996 "state": "enabled", 00:21:36.996 "thread": "nvmf_tgt_poll_group_000", 00:21:36.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.997 "listen_address": { 00:21:36.997 "trtype": "TCP", 00:21:36.997 "adrfam": "IPv4", 00:21:36.997 "traddr": "10.0.0.2", 00:21:36.997 "trsvcid": "4420" 00:21:36.997 }, 00:21:36.997 "peer_address": { 00:21:36.997 "trtype": "TCP", 00:21:36.997 "adrfam": "IPv4", 00:21:36.997 "traddr": "10.0.0.1", 00:21:36.997 "trsvcid": "49126" 00:21:36.997 }, 00:21:36.997 "auth": { 00:21:36.997 "state": 
"completed", 00:21:36.997 "digest": "sha512", 00:21:36.997 "dhgroup": "ffdhe8192" 00:21:36.997 } 00:21:36.997 } 00:21:36.997 ]' 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.997 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.258 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:37.258 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2MzNGI5MjRiNWIzMjhjZjQ1ZGE5ZGNhMmQxYTcyYmZlMzU2NWQzNTkxOTI2NjIxiQlkPA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZjNjUxN2E0YmEwNzA4NWM5MjViYTcyYzFmNTk1MDYyNTJmMGQ3NzlhODNlZGFkNDFhNmFhZGNjOGU3OGY2YSy95sE=: 00:21:37.834 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.834 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.834 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.834 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:37.835 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:38.407 request: 00:21:38.407 { 00:21:38.407 "name": "nvme0", 00:21:38.407 "trtype": "tcp", 00:21:38.407 "traddr": "10.0.0.2", 00:21:38.407 "adrfam": "ipv4", 00:21:38.407 "trsvcid": "4420", 00:21:38.407 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:38.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.407 "prchk_reftag": false, 00:21:38.407 "prchk_guard": false, 00:21:38.407 "hdgst": false, 00:21:38.407 "ddgst": false, 00:21:38.407 "dhchap_key": "key2", 00:21:38.407 "allow_unrecognized_csi": false, 00:21:38.407 "method": "bdev_nvme_attach_controller", 00:21:38.407 "req_id": 1 00:21:38.407 } 00:21:38.407 Got JSON-RPC error response 00:21:38.407 response: 00:21:38.407 { 00:21:38.407 "code": -5, 00:21:38.407 "message": 
"Input/output error" 00:21:38.407 } 00:21:38.407 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.407 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.407 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.407 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.407 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.408 13:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.408 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.979 request: 00:21:38.979 { 00:21:38.979 "name": "nvme0", 00:21:38.979 "trtype": "tcp", 00:21:38.979 "traddr": "10.0.0.2", 00:21:38.979 "adrfam": "ipv4", 00:21:38.979 "trsvcid": "4420", 00:21:38.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:38.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.979 "prchk_reftag": false, 00:21:38.979 "prchk_guard": false, 00:21:38.979 "hdgst": 
false, 00:21:38.979 "ddgst": false, 00:21:38.979 "dhchap_key": "key1", 00:21:38.979 "dhchap_ctrlr_key": "ckey2", 00:21:38.979 "allow_unrecognized_csi": false, 00:21:38.979 "method": "bdev_nvme_attach_controller", 00:21:38.979 "req_id": 1 00:21:38.979 } 00:21:38.979 Got JSON-RPC error response 00:21:38.979 response: 00:21:38.979 { 00:21:38.979 "code": -5, 00:21:38.979 "message": "Input/output error" 00:21:38.979 } 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.979 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.240 request: 00:21:39.240 { 00:21:39.240 "name": "nvme0", 00:21:39.240 "trtype": 
"tcp", 00:21:39.240 "traddr": "10.0.0.2", 00:21:39.240 "adrfam": "ipv4", 00:21:39.240 "trsvcid": "4420", 00:21:39.240 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.240 "prchk_reftag": false, 00:21:39.240 "prchk_guard": false, 00:21:39.240 "hdgst": false, 00:21:39.240 "ddgst": false, 00:21:39.240 "dhchap_key": "key1", 00:21:39.240 "dhchap_ctrlr_key": "ckey1", 00:21:39.240 "allow_unrecognized_csi": false, 00:21:39.240 "method": "bdev_nvme_attach_controller", 00:21:39.240 "req_id": 1 00:21:39.240 } 00:21:39.240 Got JSON-RPC error response 00:21:39.240 response: 00:21:39.240 { 00:21:39.240 "code": -5, 00:21:39.240 "message": "Input/output error" 00:21:39.240 } 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 896466 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 896466 ']' 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 896466 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.240 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896466 00:21:39.500 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.500 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.500 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896466' 00:21:39.500 killing process with pid 896466 00:21:39.500 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 896466 00:21:39.500 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 896466 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=922425 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 922425 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 922425 ']' 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.500 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.441 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.441 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 922425 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 922425 ']' 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.442 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.442 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.442 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:40.442 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:40.442 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.702 null0 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KHY 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.tRB ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRB 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nFX 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cib ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cib 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AnF 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.awm ]] 00:21:40.702 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awm 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Zjd 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.703 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.644 nvme0n1 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.644 { 00:21:41.644 "cntlid": 1, 00:21:41.644 "qid": 0, 00:21:41.644 "state": "enabled", 00:21:41.644 "thread": "nvmf_tgt_poll_group_000", 00:21:41.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.644 "listen_address": { 00:21:41.644 "trtype": "TCP", 00:21:41.644 "adrfam": "IPv4", 00:21:41.644 "traddr": "10.0.0.2", 00:21:41.644 "trsvcid": "4420" 00:21:41.644 }, 00:21:41.644 "peer_address": { 00:21:41.644 "trtype": "TCP", 00:21:41.644 "adrfam": "IPv4", 00:21:41.644 "traddr": 
"10.0.0.1", 00:21:41.644 "trsvcid": "49178" 00:21:41.644 }, 00:21:41.644 "auth": { 00:21:41.644 "state": "completed", 00:21:41.644 "digest": "sha512", 00:21:41.644 "dhgroup": "ffdhe8192" 00:21:41.644 } 00:21:41.644 } 00:21:41.644 ]' 00:21:41.644 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.905 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.166 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:42.166 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:42.736 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:42.736 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:42.997 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.997 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.997 request: 00:21:42.997 { 00:21:42.997 "name": "nvme0", 00:21:42.997 "trtype": "tcp", 00:21:42.997 "traddr": "10.0.0.2", 00:21:42.997 "adrfam": "ipv4", 00:21:42.997 "trsvcid": "4420", 00:21:42.997 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:42.997 "prchk_reftag": false, 00:21:42.997 "prchk_guard": false, 00:21:42.997 "hdgst": false, 00:21:42.997 "ddgst": false, 00:21:42.997 "dhchap_key": "key3", 00:21:42.997 
"allow_unrecognized_csi": false, 00:21:42.997 "method": "bdev_nvme_attach_controller", 00:21:42.997 "req_id": 1 00:21:42.997 } 00:21:42.997 Got JSON-RPC error response 00:21:42.997 response: 00:21:42.997 { 00:21:42.997 "code": -5, 00:21:42.997 "message": "Input/output error" 00:21:42.997 } 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:43.258 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.258 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.518 request: 00:21:43.518 { 00:21:43.518 "name": "nvme0", 00:21:43.518 "trtype": "tcp", 00:21:43.518 "traddr": "10.0.0.2", 00:21:43.518 "adrfam": "ipv4", 00:21:43.518 "trsvcid": "4420", 00:21:43.518 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:43.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.518 "prchk_reftag": false, 00:21:43.518 "prchk_guard": false, 00:21:43.518 "hdgst": false, 00:21:43.518 "ddgst": false, 00:21:43.518 "dhchap_key": "key3", 00:21:43.518 "allow_unrecognized_csi": false, 00:21:43.518 "method": "bdev_nvme_attach_controller", 00:21:43.518 "req_id": 1 00:21:43.518 } 00:21:43.518 Got JSON-RPC error response 00:21:43.518 response: 00:21:43.518 { 00:21:43.518 "code": -5, 00:21:43.518 "message": "Input/output error" 00:21:43.518 } 00:21:43.518 
13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:43.518 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.518 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.519 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:43.780 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.781 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:43.781 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.781 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.781 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:43.781 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:44.042 request: 00:21:44.042 { 00:21:44.042 "name": "nvme0", 00:21:44.042 "trtype": "tcp", 00:21:44.042 "traddr": "10.0.0.2", 00:21:44.042 "adrfam": "ipv4", 00:21:44.042 "trsvcid": "4420", 00:21:44.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.042 "prchk_reftag": false, 00:21:44.042 "prchk_guard": false, 00:21:44.042 "hdgst": false, 00:21:44.042 "ddgst": false, 00:21:44.042 "dhchap_key": "key0", 00:21:44.042 "dhchap_ctrlr_key": "key1", 00:21:44.042 "allow_unrecognized_csi": false, 00:21:44.042 "method": "bdev_nvme_attach_controller", 00:21:44.042 "req_id": 1 00:21:44.042 } 00:21:44.042 Got JSON-RPC error response 00:21:44.042 response: 00:21:44.042 { 00:21:44.042 "code": -5, 00:21:44.042 "message": "Input/output error" 00:21:44.042 } 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:44.042 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:44.302 nvme0n1 00:21:44.303 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:44.303 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:44.303 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.564 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:44.876 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.876 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:44.876 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:44.876 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:45.484 nvme0n1 00:21:45.484 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:45.484 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:45.484 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.484 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.484 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.745 
13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:45.745 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: --dhchap-ctrl-secret DHHC-1:03:Zjc0OTVmMTZhMDQzMGM0MDA3OWJjNGQ5YWJmZjFkNTI5YTJlZjdjZWVlMDc5Mzk1YjkxMDRlM2IyOTNmNjc5Zh1J2fI=: 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:46.317 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:46.578 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:46.578 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.578 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:46.578 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:47.150 request: 00:21:47.151 { 00:21:47.151 "name": "nvme0", 00:21:47.151 "trtype": "tcp", 00:21:47.151 "traddr": "10.0.0.2", 00:21:47.151 "adrfam": "ipv4", 00:21:47.151 "trsvcid": "4420", 00:21:47.151 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:47.151 "prchk_reftag": false, 00:21:47.151 "prchk_guard": false, 00:21:47.151 "hdgst": false, 00:21:47.151 "ddgst": false, 00:21:47.151 "dhchap_key": "key1", 00:21:47.151 "allow_unrecognized_csi": false, 00:21:47.151 "method": "bdev_nvme_attach_controller", 00:21:47.151 "req_id": 1 00:21:47.151 } 00:21:47.151 Got JSON-RPC error response 00:21:47.151 response: 00:21:47.151 { 00:21:47.151 "code": -5, 00:21:47.151 "message": "Input/output error" 00:21:47.151 } 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.151 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:47.722 nvme0n1 00:21:47.722 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:47.722 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:47.722 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.983 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.983 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.983 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:48.244 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:48.504 nvme0n1 00:21:48.505 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:48.505 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:48.505 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.505 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.505 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.505 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: '' 2s 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: ]] 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDYxMjAyYWUyYzlkY2UyNzRlNWVhNWVlODYxZjhhOTD503Dm: 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:48.765 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:50.678 
13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:50.678 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:50.678 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:50.678 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: 2s 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:50.939 13:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: ]] 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzYxYTQyZWZhYTE5MDU3NTQ4NjllMWQxYjc4ZWYxZmJiNWFjMTI2NTQxM2ZhNDhldwdfKA==: 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:50.939 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:52.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:53.809 nvme0n1 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:53.810 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.069 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:54.070 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:54.070 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:54.329 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:54.590 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:55.162 request: 00:21:55.162 { 00:21:55.162 "name": "nvme0", 00:21:55.162 "dhchap_key": "key1", 00:21:55.162 "dhchap_ctrlr_key": "key3", 00:21:55.162 "method": "bdev_nvme_set_keys", 00:21:55.162 "req_id": 1 00:21:55.162 } 00:21:55.162 Got JSON-RPC error response 00:21:55.162 response: 00:21:55.162 { 00:21:55.162 "code": -13, 00:21:55.162 "message": "Permission denied" 00:21:55.162 } 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:55.162 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:55.162 13:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.423 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:55.423 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:56.365 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:56.365 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:56.365 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.627 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:57.199 nvme0n1 00:21:57.199 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.199 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.200 13:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.200 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.771 request: 00:21:57.771 { 00:21:57.771 "name": "nvme0", 00:21:57.771 "dhchap_key": "key2", 00:21:57.771 "dhchap_ctrlr_key": "key0", 00:21:57.771 "method": "bdev_nvme_set_keys", 00:21:57.771 "req_id": 1 00:21:57.771 } 00:21:57.771 Got JSON-RPC error response 00:21:57.771 response: 00:21:57.771 { 00:21:57.771 "code": -13, 00:21:57.771 "message": "Permission denied" 00:21:57.771 } 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:57.771 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.033 13:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:58.033 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 896811 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 896811 ']' 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 896811 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.976 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896811 00:21:59.237 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:59.237 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:59.237 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 896811' 00:21:59.237 killing process with pid 896811 00:21:59.237 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 896811 00:21:59.237 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 896811 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.238 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.238 rmmod nvme_tcp 00:21:59.499 rmmod nvme_fabrics 00:21:59.499 rmmod nvme_keyring 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 922425 ']' 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 922425 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 922425 ']' 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 922425 00:21:59.499 13:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.499 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 922425 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 922425' 00:21:59.499 killing process with pid 922425 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 922425 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 922425 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.499 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KHY /tmp/spdk.key-sha256.nFX /tmp/spdk.key-sha384.AnF /tmp/spdk.key-sha512.Zjd /tmp/spdk.key-sha512.tRB /tmp/spdk.key-sha384.cib /tmp/spdk.key-sha256.awm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:02.051 00:22:02.051 real 2m37.151s 00:22:02.051 user 5m53.269s 00:22:02.051 sys 0m25.018s 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.051 ************************************ 00:22:02.051 END TEST nvmf_auth_target 00:22:02.051 ************************************ 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.051 13:06:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.051 ************************************ 00:22:02.051 START TEST nvmf_bdevio_no_huge 00:22:02.051 ************************************ 00:22:02.051 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:02.051 * Looking for test storage... 00:22:02.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.052 13:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.052 --rc genhtml_branch_coverage=1 00:22:02.052 --rc genhtml_function_coverage=1 00:22:02.052 --rc genhtml_legend=1 00:22:02.052 --rc geninfo_all_blocks=1 00:22:02.052 --rc geninfo_unexecuted_blocks=1 00:22:02.052 00:22:02.052 ' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.052 --rc genhtml_branch_coverage=1 00:22:02.052 --rc genhtml_function_coverage=1 00:22:02.052 --rc genhtml_legend=1 00:22:02.052 --rc geninfo_all_blocks=1 00:22:02.052 --rc geninfo_unexecuted_blocks=1 00:22:02.052 00:22:02.052 ' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.052 --rc genhtml_branch_coverage=1 00:22:02.052 --rc genhtml_function_coverage=1 00:22:02.052 --rc genhtml_legend=1 00:22:02.052 --rc geninfo_all_blocks=1 00:22:02.052 --rc geninfo_unexecuted_blocks=1 00:22:02.052 00:22:02.052 ' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.052 --rc genhtml_branch_coverage=1 00:22:02.052 --rc 
genhtml_function_coverage=1 00:22:02.052 --rc genhtml_legend=1 00:22:02.052 --rc geninfo_all_blocks=1 00:22:02.052 --rc geninfo_unexecuted_blocks=1 00:22:02.052 00:22:02.052 ' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.052 13:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.052 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.053 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:22:10.213 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:10.213 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.213 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:10.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.214 
13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:10.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:10.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:22:10.214 00:22:10.214 --- 10.0.0.2 ping statistics --- 00:22:10.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.214 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:22:10.214 00:22:10.214 --- 10.0.0.1 ping statistics --- 00:22:10.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.214 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=931222 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 931222 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 931222 ']' 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.214 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 [2024-11-29 13:06:11.961230] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:10.214 [2024-11-29 13:06:11.961304] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:10.214 [2024-11-29 13:06:12.067431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.214 [2024-11-29 13:06:12.128020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.214 [2024-11-29 13:06:12.128068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.214 [2024-11-29 13:06:12.128077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.214 [2024-11-29 13:06:12.128084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.214 [2024-11-29 13:06:12.128090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.214 [2024-11-29 13:06:12.129894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:10.214 [2024-11-29 13:06:12.130026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:10.214 [2024-11-29 13:06:12.130205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:10.214 [2024-11-29 13:06:12.130253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 [2024-11-29 13:06:12.823066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:10.214 13:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 Malloc0 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.214 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 [2024-11-29 13:06:12.876935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.215 13:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.215 { 00:22:10.215 "params": { 00:22:10.215 "name": "Nvme$subsystem", 00:22:10.215 "trtype": "$TEST_TRANSPORT", 00:22:10.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.215 "adrfam": "ipv4", 00:22:10.215 "trsvcid": "$NVMF_PORT", 00:22:10.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.215 "hdgst": ${hdgst:-false}, 00:22:10.215 "ddgst": ${ddgst:-false} 00:22:10.215 }, 00:22:10.215 "method": "bdev_nvme_attach_controller" 00:22:10.215 } 00:22:10.215 EOF 00:22:10.215 )") 00:22:10.215 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:10.477 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:10.477 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:10.477 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.477 "params": { 00:22:10.477 "name": "Nvme1", 00:22:10.477 "trtype": "tcp", 00:22:10.477 "traddr": "10.0.0.2", 00:22:10.477 "adrfam": "ipv4", 00:22:10.477 "trsvcid": "4420", 00:22:10.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.477 "hdgst": false, 00:22:10.477 "ddgst": false 00:22:10.477 }, 00:22:10.477 "method": "bdev_nvme_attach_controller" 00:22:10.477 }' 00:22:10.477 [2024-11-29 13:06:12.936130] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:22:10.477 [2024-11-29 13:06:12.936204] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid931488 ] 00:22:10.477 [2024-11-29 13:06:13.031328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:10.477 [2024-11-29 13:06:13.091716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.477 [2024-11-29 13:06:13.091880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.477 [2024-11-29 13:06:13.091880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.739 I/O targets: 00:22:10.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:10.739 00:22:10.739 00:22:10.739 CUnit - A unit testing framework for C - Version 2.1-3 00:22:10.739 http://cunit.sourceforge.net/ 00:22:10.739 00:22:10.739 00:22:10.739 Suite: bdevio tests on: Nvme1n1 00:22:10.739 Test: blockdev write read block ...passed 00:22:10.739 Test: blockdev write zeroes read block ...passed 00:22:10.739 Test: blockdev write zeroes read no split ...passed 00:22:10.999 Test: blockdev write zeroes 
read split ...passed 00:22:10.999 Test: blockdev write zeroes read split partial ...passed 00:22:10.999 Test: blockdev reset ...[2024-11-29 13:06:13.448238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:10.999 [2024-11-29 13:06:13.448314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb9810 (9): Bad file descriptor 00:22:10.999 [2024-11-29 13:06:13.465946] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:10.999 passed 00:22:10.999 Test: blockdev write read 8 blocks ...passed 00:22:10.999 Test: blockdev write read size > 128k ...passed 00:22:10.999 Test: blockdev write read invalid size ...passed 00:22:10.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:10.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:10.999 Test: blockdev write read max offset ...passed 00:22:10.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:10.999 Test: blockdev writev readv 8 blocks ...passed 00:22:10.999 Test: blockdev writev readv 30 x 1block ...passed 00:22:11.259 Test: blockdev writev readv block ...passed 00:22:11.259 Test: blockdev writev readv size > 128k ...passed 00:22:11.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:11.259 Test: blockdev comparev and writev ...[2024-11-29 13:06:13.686532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.686579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.686597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 
13:06:13.686606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.687988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.259 [2024-11-29 13:06:13.687997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:11.259 passed 00:22:11.259 Test: blockdev nvme passthru rw ...passed 00:22:11.259 Test: blockdev nvme passthru vendor specific ...[2024-11-29 13:06:13.772734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.259 [2024-11-29 13:06:13.772750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.773000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.259 [2024-11-29 13:06:13.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.773276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.259 [2024-11-29 13:06:13.773288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:11.259 [2024-11-29 13:06:13.773504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.259 [2024-11-29 13:06:13.773514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:11.259 passed 00:22:11.259 Test: blockdev nvme admin passthru ...passed 00:22:11.259 Test: blockdev copy ...passed 00:22:11.259 00:22:11.259 Run Summary: Type Total Ran Passed Failed Inactive 00:22:11.259 suites 1 1 n/a 0 0 00:22:11.259 tests 23 23 23 0 0 00:22:11.259 asserts 152 152 152 0 n/a 00:22:11.259 00:22:11.259 Elapsed time = 1.055 seconds 
00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.520 rmmod nvme_tcp 00:22:11.520 rmmod nvme_fabrics 00:22:11.520 rmmod nvme_keyring 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 931222 ']' 00:22:11.520 13:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 931222 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 931222 ']' 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 931222 00:22:11.520 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 931222 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 931222' 00:22:11.782 killing process with pid 931222 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 931222 00:22:11.782 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 931222 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:12.044 13:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.044 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.597 00:22:14.597 real 0m12.422s 00:22:14.597 user 0m13.717s 00:22:14.597 sys 0m6.755s 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.597 ************************************ 00:22:14.597 END TEST nvmf_bdevio_no_huge 00:22:14.597 ************************************ 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:14.597 
************************************ 00:22:14.597 START TEST nvmf_tls 00:22:14.597 ************************************ 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:14.597 * Looking for test storage... 00:22:14.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:14.597 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.597 --rc genhtml_branch_coverage=1 00:22:14.597 --rc genhtml_function_coverage=1 00:22:14.597 --rc genhtml_legend=1 00:22:14.597 --rc geninfo_all_blocks=1 00:22:14.597 --rc geninfo_unexecuted_blocks=1 00:22:14.597 00:22:14.597 ' 00:22:14.597 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.597 --rc genhtml_branch_coverage=1 00:22:14.597 --rc genhtml_function_coverage=1 00:22:14.597 --rc genhtml_legend=1 00:22:14.597 --rc geninfo_all_blocks=1 00:22:14.597 --rc geninfo_unexecuted_blocks=1 00:22:14.597 00:22:14.597 ' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.598 --rc genhtml_branch_coverage=1 00:22:14.598 --rc genhtml_function_coverage=1 00:22:14.598 --rc genhtml_legend=1 00:22:14.598 --rc geninfo_all_blocks=1 00:22:14.598 --rc geninfo_unexecuted_blocks=1 00:22:14.598 00:22:14.598 ' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.598 --rc genhtml_branch_coverage=1 00:22:14.598 --rc genhtml_function_coverage=1 00:22:14.598 --rc genhtml_legend=1 00:22:14.598 --rc geninfo_all_blocks=1 00:22:14.598 --rc geninfo_unexecuted_blocks=1 00:22:14.598 00:22:14.598 ' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.598 
13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:14.598 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.747 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.748 13:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.748 13:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.748 13:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.748 
13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:22:22.748 00:22:22.748 --- 10.0.0.2 ping statistics --- 00:22:22.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.748 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:22.748 00:22:22.748 --- 10.0.0.1 ping statistics --- 00:22:22.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.748 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:22.748 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=935996 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 935996 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 935996 ']' 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.749 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.749 [2024-11-29 13:06:24.715407] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:22:22.749 [2024-11-29 13:06:24.715472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.749 [2024-11-29 13:06:24.819044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.749 [2024-11-29 13:06:24.870007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.749 [2024-11-29 13:06:24.870051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:22.749 [2024-11-29 13:06:24.870061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.749 [2024-11-29 13:06:24.870068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.749 [2024-11-29 13:06:24.870074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.749 [2024-11-29 13:06:24.870810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:23.010 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:23.271 true 00:22:23.271 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.271 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:23.532 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:23.532 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:23.532 
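The `nvmfappstart`/`waitforlisten` sequence above starts `nvmf_tgt` in the target namespace and then polls until the app accepts connections on `/var/tmp/spdk.sock`. A minimal sketch of that polling loop (the function name, retry interval, and timeout are illustrative assumptions, not the autotest helper's actual internals):

```python
import socket
import time

def wait_for_rpc_sock(path: str, timeout: float = 10.0) -> bool:
    # Poll the UNIX-domain RPC socket until something accepts
    # connections on it, or give up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(0.1)  # app not listening yet; retry
        finally:
            s.close()
    return False
```

In the log this shows up as the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message, repeated until the connect succeeds.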
13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:23.532 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.532 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:23.793 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:23.793 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:23.793 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.055 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:24.316 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:24.316 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:24.316 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:24.577 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.577 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:24.838 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:24.838 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:24.838 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:24.838 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.838 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:25.099 13:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.lVvNSn6xoQ 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.pwvLxzFEnO 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lVvNSn6xoQ 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.pwvLxzFEnO 00:22:25.099 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:25.359 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.619 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.lVvNSn6xoQ 00:22:25.619 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lVvNSn6xoQ 00:22:25.619 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.879 [2024-11-29 13:06:28.330127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.879 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.879 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.138 [2024-11-29 13:06:28.687008] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.138 [2024-11-29 13:06:28.687217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.138 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.398 malloc0 00:22:26.398 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.398 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lVvNSn6xoQ 00:22:26.657 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.916 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.lVvNSn6xoQ 00:22:36.913 Initializing NVMe Controllers 00:22:36.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.913 Initialization complete. Launching workers. 
00:22:36.913 ======================================================== 00:22:36.913 Latency(us) 00:22:36.913 Device Information : IOPS MiB/s Average min max 00:22:36.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18493.99 72.24 3460.80 1143.83 5139.23 00:22:36.913 ======================================================== 00:22:36.913 Total : 18493.99 72.24 3460.80 1143.83 5139.23 00:22:36.913 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVvNSn6xoQ 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lVvNSn6xoQ 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=938889 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 938889 /var/tmp/bdevperf.sock 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 938889 ']' 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
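The PSK files passed to `bdevperf` and `spdk_nvme_perf` above were produced earlier by `format_interchange_psk`, which wraps a secret in the NVMe TLS interchange format: a `NVMeTLSkey-1` prefix, a two-digit hash identifier, and base64 of the secret bytes followed by their CRC32. A sketch of that derivation, assuming (based on the keys visible in the log) that the secret is taken as its literal ASCII bytes and the CRC is appended little-endian:

```python
import base64
import struct
import zlib

def format_interchange_psk(secret: str, hmac: int) -> str:
    # NVMe TLS interchange format: prefix, hash id, then
    # base64(secret bytes || CRC32(secret bytes)), colon-terminated.
    data = secret.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))  # 4-byte CRC trailer
    return f"NVMeTLSkey-1:{hmac:02}:{base64.b64encode(data).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

The resulting string has the same shape as the `NVMeTLSkey-1:01:...:` keys written to the `mktemp` files and `chmod 0600`'d before the subsystem was configured.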
00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.913 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.913 [2024-11-29 13:06:39.534139] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:22:36.913 [2024-11-29 13:06:39.534202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938889 ] 00:22:37.174 [2024-11-29 13:06:39.620813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.174 [2024-11-29 13:06:39.656022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.748 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.748 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:37.748 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lVvNSn6xoQ 00:22:38.010 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:38.010 [2024-11-29 13:06:40.636427] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.271 TLSTESTn1 00:22:38.271 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:38.271 Running I/O for 10 seconds... 00:22:40.156 4473.00 IOPS, 17.47 MiB/s [2024-11-29T12:06:44.221Z] 4403.00 IOPS, 17.20 MiB/s [2024-11-29T12:06:45.163Z] 4836.67 IOPS, 18.89 MiB/s [2024-11-29T12:06:46.105Z] 5118.75 IOPS, 20.00 MiB/s [2024-11-29T12:06:47.047Z] 5135.60 IOPS, 20.06 MiB/s [2024-11-29T12:06:48.075Z] 5215.67 IOPS, 20.37 MiB/s [2024-11-29T12:06:49.069Z] 5352.57 IOPS, 20.91 MiB/s [2024-11-29T12:06:50.011Z] 5369.25 IOPS, 20.97 MiB/s [2024-11-29T12:06:50.954Z] 5392.67 IOPS, 21.07 MiB/s [2024-11-29T12:06:50.954Z] 5400.10 IOPS, 21.09 MiB/s 00:22:48.274 Latency(us) 00:22:48.274 [2024-11-29T12:06:50.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.274 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:48.274 Verification LBA range: start 0x0 length 0x2000 00:22:48.274 TLSTESTn1 : 10.02 5403.59 21.11 0.00 0.00 23651.72 4778.67 45875.20 00:22:48.274 [2024-11-29T12:06:50.954Z] =================================================================================================================== 00:22:48.274 [2024-11-29T12:06:50.954Z] Total : 5403.59 21.11 0.00 0.00 23651.72 4778.67 45875.20 00:22:48.274 { 00:22:48.274 "results": [ 00:22:48.274 { 00:22:48.274 "job": "TLSTESTn1", 00:22:48.274 "core_mask": "0x4", 00:22:48.274 "workload": "verify", 00:22:48.274 "status": "finished", 00:22:48.274 "verify_range": { 00:22:48.274 "start": 0, 00:22:48.274 "length": 8192 00:22:48.274 }, 00:22:48.274 "queue_depth": 128, 00:22:48.274 "io_size": 4096, 00:22:48.274 "runtime": 10.01723, 00:22:48.274 "iops": 
5403.589615093195, 00:22:48.274 "mibps": 21.107771933957792, 00:22:48.274 "io_failed": 0, 00:22:48.274 "io_timeout": 0, 00:22:48.274 "avg_latency_us": 23651.72212652491, 00:22:48.274 "min_latency_us": 4778.666666666667, 00:22:48.274 "max_latency_us": 45875.2 00:22:48.274 } 00:22:48.274 ], 00:22:48.274 "core_count": 1 00:22:48.274 } 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 938889 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 938889 ']' 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 938889 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938889 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938889' 00:22:48.274 killing process with pid 938889 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 938889 00:22:48.274 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.274 00:22:48.274 Latency(us) 00:22:48.274 [2024-11-29T12:06:50.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.274 [2024-11-29T12:06:50.954Z] 
=================================================================================================================== 00:22:48.274 [2024-11-29T12:06:50.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.274 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 938889 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pwvLxzFEnO 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pwvLxzFEnO 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pwvLxzFEnO 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pwvLxzFEnO 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941233 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941233 /var/tmp/bdevperf.sock 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 941233 ']' 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.535 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.535 [2024-11-29 13:06:51.101702] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:48.535 [2024-11-29 13:06:51.101760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941233 ] 00:22:48.535 [2024-11-29 13:06:51.183000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.535 [2024-11-29 13:06:51.211525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.497 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.497 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.497 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pwvLxzFEnO 00:22:49.497 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.759 [2024-11-29 13:06:52.207026] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.759 [2024-11-29 13:06:52.216039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.759 [2024-11-29 13:06:52.216165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1508be0 (107): Transport endpoint is not connected 00:22:49.759 [2024-11-29 13:06:52.217151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1508be0 (9): Bad file descriptor 00:22:49.759 
[2024-11-29 13:06:52.218153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:49.759 [2024-11-29 13:06:52.218164] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.759 [2024-11-29 13:06:52.218170] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:49.759 [2024-11-29 13:06:52.218177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:49.759 request: 00:22:49.759 { 00:22:49.759 "name": "TLSTEST", 00:22:49.759 "trtype": "tcp", 00:22:49.759 "traddr": "10.0.0.2", 00:22:49.759 "adrfam": "ipv4", 00:22:49.759 "trsvcid": "4420", 00:22:49.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.759 "prchk_reftag": false, 00:22:49.759 "prchk_guard": false, 00:22:49.759 "hdgst": false, 00:22:49.759 "ddgst": false, 00:22:49.759 "psk": "key0", 00:22:49.759 "allow_unrecognized_csi": false, 00:22:49.759 "method": "bdev_nvme_attach_controller", 00:22:49.759 "req_id": 1 00:22:49.759 } 00:22:49.759 Got JSON-RPC error response 00:22:49.759 response: 00:22:49.759 { 00:22:49.759 "code": -5, 00:22:49.759 "message": "Input/output error" 00:22:49.759 } 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 941233 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 941233 ']' 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 941233 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941233 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941233' 00:22:49.759 killing process with pid 941233 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 941233 00:22:49.759 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.759 00:22:49.759 Latency(us) 00:22:49.759 [2024-11-29T12:06:52.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.759 [2024-11-29T12:06:52.439Z] =================================================================================================================== 00:22:49.759 [2024-11-29T12:06:52.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 941233 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lVvNSn6xoQ 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
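The `NOT run_bdevperf ...` case above is a negative test: attaching with the wrong key (`/tmp/tmp.pwvLxzFEnO`) must fail, and the `NOT` wrapper converts that expected failure into a pass. A minimal sketch of that inversion (the helper's real implementation also manages xtrace state, which is omitted here):

```python
import subprocess

def expect_failure(cmd: list[str]) -> bool:
    # Mirror of the "NOT" pattern: run the command and report
    # success only when the command itself exits non-zero.
    return subprocess.run(cmd).returncode != 0
```

Here the wrapped `run_bdevperf` returns 1 because `bdev_nvme_attach_controller` gets the JSON-RPC Input/output error shown above, so the `NOT` case passes.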
00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lVvNSn6xoQ 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lVvNSn6xoQ 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lVvNSn6xoQ 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941428 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941428 /var/tmp/bdevperf.sock 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 
-w verify -t 10 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 941428 ']' 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.759 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.020 [2024-11-29 13:06:52.445982] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:50.020 [2024-11-29 13:06:52.446039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941428 ] 00:22:50.020 [2024-11-29 13:06:52.527300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.020 [2024-11-29 13:06:52.556104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.590 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.590 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.590 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lVvNSn6xoQ 00:22:50.850 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:51.111 [2024-11-29 13:06:53.583695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.111 [2024-11-29 13:06:53.589294] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.111 [2024-11-29 13:06:53.589313] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.111 [2024-11-29 13:06:53.589331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:51.111 [2024-11-29 13:06:53.589853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208fbe0 (107): Transport endpoint is not connected 00:22:51.111 [2024-11-29 13:06:53.590849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208fbe0 (9): Bad file descriptor 00:22:51.111 [2024-11-29 13:06:53.591851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:51.111 [2024-11-29 13:06:53.591858] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:51.111 [2024-11-29 13:06:53.591864] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:51.111 [2024-11-29 13:06:53.591870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:51.111 request: 00:22:51.111 { 00:22:51.111 "name": "TLSTEST", 00:22:51.111 "trtype": "tcp", 00:22:51.111 "traddr": "10.0.0.2", 00:22:51.111 "adrfam": "ipv4", 00:22:51.111 "trsvcid": "4420", 00:22:51.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.111 "prchk_reftag": false, 00:22:51.111 "prchk_guard": false, 00:22:51.111 "hdgst": false, 00:22:51.111 "ddgst": false, 00:22:51.111 "psk": "key0", 00:22:51.111 "allow_unrecognized_csi": false, 00:22:51.111 "method": "bdev_nvme_attach_controller", 00:22:51.111 "req_id": 1 00:22:51.111 } 00:22:51.111 Got JSON-RPC error response 00:22:51.111 response: 00:22:51.111 { 00:22:51.111 "code": -5, 00:22:51.111 "message": "Input/output error" 00:22:51.111 } 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 941428 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 941428 ']' 00:22:51.111 13:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 941428 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941428 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941428' 00:22:51.111 killing process with pid 941428 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 941428 00:22:51.111 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.111 00:22:51.111 Latency(us) 00:22:51.111 [2024-11-29T12:06:53.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.111 [2024-11-29T12:06:53.791Z] =================================================================================================================== 00:22:51.111 [2024-11-29T12:06:53.791Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 941428 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:51.111 13:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVvNSn6xoQ 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVvNSn6xoQ 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lVvNSn6xoQ 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lVvNSn6xoQ 00:22:51.111 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941618 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941618 /var/tmp/bdevperf.sock 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 941618 ']' 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.372 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.373 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.373 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.373 [2024-11-29 13:06:53.839805] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:51.373 [2024-11-29 13:06:53.839865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941618 ] 00:22:51.373 [2024-11-29 13:06:53.922260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.373 [2024-11-29 13:06:53.950809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.315 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.315 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.315 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lVvNSn6xoQ 00:22:52.315 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.315 [2024-11-29 13:06:54.970334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.315 [2024-11-29 13:06:54.974862] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.315 [2024-11-29 13:06:54.974879] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.315 [2024-11-29 13:06:54.974898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:52.315 [2024-11-29 13:06:54.975601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70bbe0 (107): Transport endpoint is not connected 00:22:52.315 [2024-11-29 13:06:54.976597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70bbe0 (9): Bad file descriptor 00:22:52.315 [2024-11-29 13:06:54.977598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:52.315 [2024-11-29 13:06:54.977605] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.315 [2024-11-29 13:06:54.977611] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:52.315 [2024-11-29 13:06:54.977617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:52.315 request: 00:22:52.315 { 00:22:52.315 "name": "TLSTEST", 00:22:52.315 "trtype": "tcp", 00:22:52.315 "traddr": "10.0.0.2", 00:22:52.315 "adrfam": "ipv4", 00:22:52.315 "trsvcid": "4420", 00:22:52.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.315 "prchk_reftag": false, 00:22:52.315 "prchk_guard": false, 00:22:52.315 "hdgst": false, 00:22:52.315 "ddgst": false, 00:22:52.315 "psk": "key0", 00:22:52.315 "allow_unrecognized_csi": false, 00:22:52.315 "method": "bdev_nvme_attach_controller", 00:22:52.315 "req_id": 1 00:22:52.315 } 00:22:52.315 Got JSON-RPC error response 00:22:52.315 response: 00:22:52.315 { 00:22:52.315 "code": -5, 00:22:52.315 "message": "Input/output error" 00:22:52.315 } 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 941618 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 941618 ']' 00:22:52.590 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 941618 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941618 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941618' 00:22:52.590 killing process with pid 941618 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 941618 00:22:52.590 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.590 00:22:52.590 Latency(us) 00:22:52.590 [2024-11-29T12:06:55.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.590 [2024-11-29T12:06:55.270Z] =================================================================================================================== 00:22:52.590 [2024-11-29T12:06:55.270Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 941618 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.590 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=941946 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.590 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 941946 /var/tmp/bdevperf.sock 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 941946 ']' 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.590 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.591 [2024-11-29 13:06:55.218813] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:52.591 [2024-11-29 13:06:55.218870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941946 ] 00:22:52.852 [2024-11-29 13:06:55.302835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.852 [2024-11-29 13:06:55.329478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.423 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.423 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.423 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:53.684 [2024-11-29 13:06:56.172869] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:53.684 [2024-11-29 13:06:56.172895] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:53.684 request: 00:22:53.684 { 00:22:53.684 "name": "key0", 00:22:53.684 "path": "", 00:22:53.684 "method": "keyring_file_add_key", 00:22:53.684 "req_id": 1 00:22:53.684 } 00:22:53.684 Got JSON-RPC error response 00:22:53.684 response: 00:22:53.684 { 00:22:53.684 "code": -1, 00:22:53.684 "message": "Operation not permitted" 00:22:53.684 } 00:22:53.684 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.684 [2024-11-29 13:06:56.353398] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:53.684 [2024-11-29 13:06:56.353423] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:53.684 request: 00:22:53.684 { 00:22:53.684 "name": "TLSTEST", 00:22:53.684 "trtype": "tcp", 00:22:53.684 "traddr": "10.0.0.2", 00:22:53.684 "adrfam": "ipv4", 00:22:53.684 "trsvcid": "4420", 00:22:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.684 "prchk_reftag": false, 00:22:53.684 "prchk_guard": false, 00:22:53.684 "hdgst": false, 00:22:53.684 "ddgst": false, 00:22:53.684 "psk": "key0", 00:22:53.684 "allow_unrecognized_csi": false, 00:22:53.684 "method": "bdev_nvme_attach_controller", 00:22:53.684 "req_id": 1 00:22:53.684 } 00:22:53.685 Got JSON-RPC error response 00:22:53.685 response: 00:22:53.685 { 00:22:53.685 "code": -126, 00:22:53.685 "message": "Required key not available" 00:22:53.685 } 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 941946 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 941946 ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 941946 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941946 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941946' 00:22:53.946 killing process with pid 941946 00:22:53.946 
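The failures above all surface through the same JSON-RPC path: `rpc.py -s /var/tmp/bdevperf.sock` serializes a request and the target echoes the `request:`/`response:` dump seen in the log. The sketch below reconstructs that request shape for illustration only; the parameter values mirror the log's dump, but the framing helper is an assumption (a plain JSON-RPC 2.0 object written to the Unix socket), not SPDK's actual client code.

```python
# Illustrative sketch of the JSON-RPC request that rpc.py sends for
# bdev_nvme_attach_controller. Field names/values are copied from the
# "request:" dump in the log; send_rpc() is a hypothetical minimal framing
# helper, not the real SPDK rpc client.
import json
import socket


def build_rpc_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Serialize a JSON-RPC 2.0 request payload."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    ).encode()


def send_rpc(sock_path: str, payload: bytes) -> None:
    # Hypothetical: write the request to a listening SPDK app, e.g.
    # /var/tmp/bdevperf.sock started with `bdevperf -z -r <sock_path>`.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(payload)


payload = build_rpc_request(
    "bdev_nvme_attach_controller",
    {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "key0",
    },
)
```

In the log, this call fails with code -126 ("Required key not available") because the preceding `keyring_file_add_key` with an empty path was rejected, so `"psk": "key0"` names a key that was never added.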
13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 941946 00:22:53.946 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.946 00:22:53.946 Latency(us) 00:22:53.946 [2024-11-29T12:06:56.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.946 [2024-11-29T12:06:56.626Z] =================================================================================================================== 00:22:53.946 [2024-11-29T12:06:56.626Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 941946 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 935996 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 935996 ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 935996 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935996 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935996' 00:22:53.946 killing process with pid 935996 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 935996 00:22:53.946 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 935996 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rkkm9JK3VS 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:54.207 13:06:56 
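The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above runs an inline Python snippet (`nvmf/common.sh@733`) to produce the `NVMeTLSkey-1:02:...:` string written to `/tmp/tmp.rkkm9JK3VS`. A minimal sketch of that wrapping, under stated assumptions: the configured PSK bytes are the ASCII string as given, the trailing checksum is a standard zlib CRC32 appended little-endian, and digest `2` selects the `02` (SHA-384) label. Check your SPDK version's `format_key` helper before relying on this exact layout.

```python
# Hedged reconstruction of the PSK "interchange format" wrapping seen in the
# log. Assumptions (not confirmed by the log itself): zlib CRC32 appended
# little-endian, base64 over PSK||CRC, and a trailing ':' terminator.
import base64
import zlib


def format_interchange_psk(key: str, digest: int) -> str:
    """Wrap a configured PSK into the NVMeTLSkey-1 interchange format."""
    raw = key.encode("ascii")                    # PSK bytes as configured
    crc = zlib.crc32(raw).to_bytes(4, "little")  # integrity check, LE order
    body = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{body}:"


if __name__ == "__main__":
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
```

The base64 body in the log (`MDAxMTIy...`) decodes back to the ASCII key plus four checksum bytes, which is consistent with this layout.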
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rkkm9JK3VS 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=942300 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 942300 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 942300 ']' 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.207 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.207 [2024-11-29 13:06:56.846764] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:54.207 [2024-11-29 13:06:56.846836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.468 [2024-11-29 13:06:56.929622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.468 [2024-11-29 13:06:56.958053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.468 [2024-11-29 13:06:56.958079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.468 [2024-11-29 13:06:56.958085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.468 [2024-11-29 13:06:56.958089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.468 [2024-11-29 13:06:56.958094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.468 [2024-11-29 13:06:56.958535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rkkm9JK3VS 00:22:55.039 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.299 [2024-11-29 13:06:57.810929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.299 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.560 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.560 [2024-11-29 13:06:58.147750] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.560 [2024-11-29 13:06:58.147965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:55.560 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.821 malloc0 00:22:55.821 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.821 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:22:56.081 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rkkm9JK3VS 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rkkm9JK3VS 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=942666 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 942666 /var/tmp/bdevperf.sock 
00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 942666 ']' 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.343 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.343 [2024-11-29 13:06:58.860048] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:22:56.343 [2024-11-29 13:06:58.860102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942666 ] 00:22:56.343 [2024-11-29 13:06:58.941282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.343 [2024-11-29 13:06:58.970698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.287 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.287 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:57.287 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:22:57.287 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.287 [2024-11-29 13:06:59.946337] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.547 TLSTESTn1 00:22:57.548 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.548 Running I/O for 10 seconds... 
00:22:59.874 4809.00 IOPS, 18.79 MiB/s [2024-11-29T12:07:03.497Z] 5211.00 IOPS, 20.36 MiB/s [2024-11-29T12:07:04.437Z] 5495.33 IOPS, 21.47 MiB/s [2024-11-29T12:07:05.379Z] 5671.00 IOPS, 22.15 MiB/s [2024-11-29T12:07:06.319Z] 5675.80 IOPS, 22.17 MiB/s [2024-11-29T12:07:07.262Z] 5771.83 IOPS, 22.55 MiB/s [2024-11-29T12:07:08.204Z] 5717.43 IOPS, 22.33 MiB/s [2024-11-29T12:07:09.590Z] 5722.25 IOPS, 22.35 MiB/s [2024-11-29T12:07:10.163Z] 5710.11 IOPS, 22.31 MiB/s [2024-11-29T12:07:10.425Z] 5678.80 IOPS, 22.18 MiB/s 00:23:07.745 Latency(us) 00:23:07.745 [2024-11-29T12:07:10.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.745 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.745 Verification LBA range: start 0x0 length 0x2000 00:23:07.745 TLSTESTn1 : 10.01 5683.88 22.20 0.00 0.00 22487.24 5761.71 24357.55 00:23:07.745 [2024-11-29T12:07:10.425Z] =================================================================================================================== 00:23:07.745 [2024-11-29T12:07:10.425Z] Total : 5683.88 22.20 0.00 0.00 22487.24 5761.71 24357.55 00:23:07.745 { 00:23:07.745 "results": [ 00:23:07.745 { 00:23:07.745 "job": "TLSTESTn1", 00:23:07.745 "core_mask": "0x4", 00:23:07.745 "workload": "verify", 00:23:07.745 "status": "finished", 00:23:07.745 "verify_range": { 00:23:07.745 "start": 0, 00:23:07.745 "length": 8192 00:23:07.745 }, 00:23:07.745 "queue_depth": 128, 00:23:07.745 "io_size": 4096, 00:23:07.745 "runtime": 10.013591, 00:23:07.745 "iops": 5683.8750454257615, 00:23:07.745 "mibps": 22.20263689619438, 00:23:07.745 "io_failed": 0, 00:23:07.745 "io_timeout": 0, 00:23:07.745 "avg_latency_us": 22487.241024199404, 00:23:07.745 "min_latency_us": 5761.706666666667, 00:23:07.745 "max_latency_us": 24357.546666666665 00:23:07.745 } 00:23:07.745 ], 00:23:07.745 "core_count": 1 00:23:07.745 } 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 942666 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 942666 ']' 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 942666 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942666 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942666' 00:23:07.745 killing process with pid 942666 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 942666 00:23:07.745 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.745 00:23:07.745 Latency(us) 00:23:07.745 [2024-11-29T12:07:10.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.745 [2024-11-29T12:07:10.425Z] =================================================================================================================== 00:23:07.745 [2024-11-29T12:07:10.425Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 942666 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rkkm9JK3VS 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rkkm9JK3VS 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rkkm9JK3VS 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rkkm9JK3VS 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rkkm9JK3VS 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=945005 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 945005 /var/tmp/bdevperf.sock 
00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 945005 ']' 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.745 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.745 [2024-11-29 13:07:10.422817] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:23:07.745 [2024-11-29 13:07:10.422873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945005 ] 00:23:08.006 [2024-11-29 13:07:10.508120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.006 [2024-11-29 13:07:10.536386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.578 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.578 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.578 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:08.840 [2024-11-29 13:07:11.411450] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rkkm9JK3VS': 0100666 00:23:08.840 [2024-11-29 13:07:11.411474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:08.840 request: 00:23:08.840 { 00:23:08.840 "name": "key0", 00:23:08.840 "path": "/tmp/tmp.rkkm9JK3VS", 00:23:08.840 "method": "keyring_file_add_key", 00:23:08.840 "req_id": 1 00:23:08.840 } 00:23:08.840 Got JSON-RPC error response 00:23:08.840 response: 00:23:08.840 { 00:23:08.840 "code": -1, 00:23:08.840 "message": "Operation not permitted" 00:23:08.840 } 00:23:08.840 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:09.101 [2024-11-29 13:07:11.595979] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.101 [2024-11-29 13:07:11.596005] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:09.101 request: 00:23:09.101 { 00:23:09.101 "name": "TLSTEST", 00:23:09.101 "trtype": "tcp", 00:23:09.101 "traddr": "10.0.0.2", 00:23:09.101 "adrfam": "ipv4", 00:23:09.101 "trsvcid": "4420", 00:23:09.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.101 "prchk_reftag": false, 00:23:09.101 "prchk_guard": false, 00:23:09.101 "hdgst": false, 00:23:09.101 "ddgst": false, 00:23:09.101 "psk": "key0", 00:23:09.101 "allow_unrecognized_csi": false, 00:23:09.101 "method": "bdev_nvme_attach_controller", 00:23:09.101 "req_id": 1 00:23:09.101 } 00:23:09.101 Got JSON-RPC error response 00:23:09.101 response: 00:23:09.101 { 00:23:09.101 "code": -126, 00:23:09.101 "message": "Required key not available" 00:23:09.101 } 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 945005 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 945005 ']' 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 945005 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 945005 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.101 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 945005' 00:23:09.102 killing process with pid 945005 00:23:09.102 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 945005 00:23:09.102 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.102 00:23:09.102 Latency(us) 00:23:09.102 [2024-11-29T12:07:11.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.102 [2024-11-29T12:07:11.782Z] =================================================================================================================== 00:23:09.102 [2024-11-29T12:07:11.782Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.102 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 945005 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 942300 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 942300 ']' 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 942300 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942300 00:23:09.391 13:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942300' 00:23:09.391 killing process with pid 942300 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 942300 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 942300 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=945349 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 945349 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 945349 ']' 00:23:09.391 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.392 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.392 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:09.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.392 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.392 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.392 [2024-11-29 13:07:12.028242] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:09.392 [2024-11-29 13:07:12.028297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.652 [2024-11-29 13:07:12.119728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.652 [2024-11-29 13:07:12.148448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.652 [2024-11-29 13:07:12.148478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.652 [2024-11-29 13:07:12.148484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.652 [2024-11-29 13:07:12.148488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.652 [2024-11-29 13:07:12.148492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.652 [2024-11-29 13:07:12.148941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rkkm9JK3VS 00:23:10.224 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.485 [2024-11-29 13:07:13.025335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.485 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.745 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.745 [2024-11-29 13:07:13.386228] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.745 [2024-11-29 13:07:13.386428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.745 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.006 malloc0 00:23:11.006 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.266 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:11.266 [2024-11-29 13:07:13.909348] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rkkm9JK3VS': 0100666 00:23:11.266 [2024-11-29 13:07:13.909370] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.266 request: 00:23:11.266 { 00:23:11.266 "name": "key0", 00:23:11.266 "path": "/tmp/tmp.rkkm9JK3VS", 00:23:11.266 "method": "keyring_file_add_key", 00:23:11.266 "req_id": 1 
00:23:11.266 } 00:23:11.266 Got JSON-RPC error response 00:23:11.266 response: 00:23:11.266 { 00:23:11.266 "code": -1, 00:23:11.266 "message": "Operation not permitted" 00:23:11.266 } 00:23:11.266 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.527 [2024-11-29 13:07:14.073778] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:11.527 [2024-11-29 13:07:14.073808] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.527 request: 00:23:11.527 { 00:23:11.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.527 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.527 "psk": "key0", 00:23:11.527 "method": "nvmf_subsystem_add_host", 00:23:11.527 "req_id": 1 00:23:11.527 } 00:23:11.527 Got JSON-RPC error response 00:23:11.527 response: 00:23:11.527 { 00:23:11.527 "code": -32603, 00:23:11.527 "message": "Internal error" 00:23:11.527 } 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 945349 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 945349 ']' 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 945349 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.527 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 945349 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 945349' 00:23:11.527 killing process with pid 945349 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 945349 00:23:11.527 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 945349 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rkkm9JK3VS 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=945729 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 945729 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 945729 ']' 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.788 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.788 [2024-11-29 13:07:14.332765] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:11.788 [2024-11-29 13:07:14.332822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.788 [2024-11-29 13:07:14.421550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.788 [2024-11-29 13:07:14.449800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.788 [2024-11-29 13:07:14.449831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.788 [2024-11-29 13:07:14.449837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.788 [2024-11-29 13:07:14.449842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.788 [2024-11-29 13:07:14.449846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.788 [2024-11-29 13:07:14.450294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rkkm9JK3VS 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.726 [2024-11-29 13:07:15.302731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.726 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.986 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.986 [2024-11-29 13:07:15.623508] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.986 [2024-11-29 13:07:15.623708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:12.986 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.247 malloc0 00:23:13.247 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.509 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:13.509 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=946115 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 946115 /var/tmp/bdevperf.sock 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 946115 ']' 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:13.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.769 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.769 [2024-11-29 13:07:16.336891] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:13.769 [2024-11-29 13:07:16.336944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946115 ] 00:23:13.769 [2024-11-29 13:07:16.420704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.030 [2024-11-29 13:07:16.450194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.600 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.600 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.600 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:14.860 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.860 [2024-11-29 13:07:17.473909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.121 TLSTESTn1 00:23:15.121 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:15.382 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:15.382 "subsystems": [ 00:23:15.382 { 00:23:15.382 "subsystem": "keyring", 00:23:15.382 "config": [ 00:23:15.382 { 00:23:15.382 "method": "keyring_file_add_key", 00:23:15.382 "params": { 00:23:15.382 "name": "key0", 00:23:15.382 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:15.382 } 00:23:15.382 } 00:23:15.382 ] 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "subsystem": "iobuf", 00:23:15.382 "config": [ 00:23:15.382 { 00:23:15.382 "method": "iobuf_set_options", 00:23:15.382 "params": { 00:23:15.382 "small_pool_count": 8192, 00:23:15.382 "large_pool_count": 1024, 00:23:15.382 "small_bufsize": 8192, 00:23:15.382 "large_bufsize": 135168, 00:23:15.382 "enable_numa": false 00:23:15.382 } 00:23:15.382 } 00:23:15.382 ] 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "subsystem": "sock", 00:23:15.382 "config": [ 00:23:15.382 { 00:23:15.382 "method": "sock_set_default_impl", 00:23:15.382 "params": { 00:23:15.382 "impl_name": "posix" 00:23:15.382 } 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "method": "sock_impl_set_options", 00:23:15.382 "params": { 00:23:15.382 "impl_name": "ssl", 00:23:15.382 "recv_buf_size": 4096, 00:23:15.382 "send_buf_size": 4096, 00:23:15.382 "enable_recv_pipe": true, 00:23:15.382 "enable_quickack": false, 00:23:15.382 "enable_placement_id": 0, 00:23:15.382 "enable_zerocopy_send_server": true, 00:23:15.382 "enable_zerocopy_send_client": false, 00:23:15.382 "zerocopy_threshold": 0, 00:23:15.382 "tls_version": 0, 00:23:15.382 "enable_ktls": false 00:23:15.382 } 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "method": "sock_impl_set_options", 00:23:15.382 "params": { 00:23:15.382 "impl_name": "posix", 00:23:15.382 "recv_buf_size": 2097152, 00:23:15.382 "send_buf_size": 2097152, 00:23:15.382 "enable_recv_pipe": true, 00:23:15.382 "enable_quickack": false, 00:23:15.382 "enable_placement_id": 0, 
00:23:15.382 "enable_zerocopy_send_server": true, 00:23:15.382 "enable_zerocopy_send_client": false, 00:23:15.382 "zerocopy_threshold": 0, 00:23:15.382 "tls_version": 0, 00:23:15.382 "enable_ktls": false 00:23:15.382 } 00:23:15.382 } 00:23:15.382 ] 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "subsystem": "vmd", 00:23:15.382 "config": [] 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "subsystem": "accel", 00:23:15.382 "config": [ 00:23:15.382 { 00:23:15.382 "method": "accel_set_options", 00:23:15.382 "params": { 00:23:15.382 "small_cache_size": 128, 00:23:15.382 "large_cache_size": 16, 00:23:15.382 "task_count": 2048, 00:23:15.382 "sequence_count": 2048, 00:23:15.382 "buf_count": 2048 00:23:15.382 } 00:23:15.382 } 00:23:15.382 ] 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "subsystem": "bdev", 00:23:15.382 "config": [ 00:23:15.382 { 00:23:15.382 "method": "bdev_set_options", 00:23:15.382 "params": { 00:23:15.382 "bdev_io_pool_size": 65535, 00:23:15.382 "bdev_io_cache_size": 256, 00:23:15.382 "bdev_auto_examine": true, 00:23:15.382 "iobuf_small_cache_size": 128, 00:23:15.382 "iobuf_large_cache_size": 16 00:23:15.382 } 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "method": "bdev_raid_set_options", 00:23:15.382 "params": { 00:23:15.382 "process_window_size_kb": 1024, 00:23:15.382 "process_max_bandwidth_mb_sec": 0 00:23:15.382 } 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "method": "bdev_iscsi_set_options", 00:23:15.382 "params": { 00:23:15.382 "timeout_sec": 30 00:23:15.382 } 00:23:15.382 }, 00:23:15.382 { 00:23:15.382 "method": "bdev_nvme_set_options", 00:23:15.382 "params": { 00:23:15.382 "action_on_timeout": "none", 00:23:15.382 "timeout_us": 0, 00:23:15.382 "timeout_admin_us": 0, 00:23:15.382 "keep_alive_timeout_ms": 10000, 00:23:15.382 "arbitration_burst": 0, 00:23:15.382 "low_priority_weight": 0, 00:23:15.382 "medium_priority_weight": 0, 00:23:15.382 "high_priority_weight": 0, 00:23:15.382 "nvme_adminq_poll_period_us": 10000, 00:23:15.382 "nvme_ioq_poll_period_us": 0, 
00:23:15.382 "io_queue_requests": 0, 00:23:15.382 "delay_cmd_submit": true, 00:23:15.382 "transport_retry_count": 4, 00:23:15.382 "bdev_retry_count": 3, 00:23:15.382 "transport_ack_timeout": 0, 00:23:15.382 "ctrlr_loss_timeout_sec": 0, 00:23:15.382 "reconnect_delay_sec": 0, 00:23:15.382 "fast_io_fail_timeout_sec": 0, 00:23:15.382 "disable_auto_failback": false, 00:23:15.382 "generate_uuids": false, 00:23:15.382 "transport_tos": 0, 00:23:15.382 "nvme_error_stat": false, 00:23:15.382 "rdma_srq_size": 0, 00:23:15.382 "io_path_stat": false, 00:23:15.382 "allow_accel_sequence": false, 00:23:15.382 "rdma_max_cq_size": 0, 00:23:15.382 "rdma_cm_event_timeout_ms": 0, 00:23:15.382 "dhchap_digests": [ 00:23:15.382 "sha256", 00:23:15.382 "sha384", 00:23:15.382 "sha512" 00:23:15.382 ], 00:23:15.382 "dhchap_dhgroups": [ 00:23:15.382 "null", 00:23:15.382 "ffdhe2048", 00:23:15.383 "ffdhe3072", 00:23:15.383 "ffdhe4096", 00:23:15.383 "ffdhe6144", 00:23:15.383 "ffdhe8192" 00:23:15.383 ] 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "bdev_nvme_set_hotplug", 00:23:15.383 "params": { 00:23:15.383 "period_us": 100000, 00:23:15.383 "enable": false 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "bdev_malloc_create", 00:23:15.383 "params": { 00:23:15.383 "name": "malloc0", 00:23:15.383 "num_blocks": 8192, 00:23:15.383 "block_size": 4096, 00:23:15.383 "physical_block_size": 4096, 00:23:15.383 "uuid": "0b339fc2-f684-4734-a07a-c6132260fcf2", 00:23:15.383 "optimal_io_boundary": 0, 00:23:15.383 "md_size": 0, 00:23:15.383 "dif_type": 0, 00:23:15.383 "dif_is_head_of_md": false, 00:23:15.383 "dif_pi_format": 0 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "bdev_wait_for_examine" 00:23:15.383 } 00:23:15.383 ] 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "subsystem": "nbd", 00:23:15.383 "config": [] 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "subsystem": "scheduler", 00:23:15.383 "config": [ 00:23:15.383 { 00:23:15.383 "method": 
"framework_set_scheduler", 00:23:15.383 "params": { 00:23:15.383 "name": "static" 00:23:15.383 } 00:23:15.383 } 00:23:15.383 ] 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "subsystem": "nvmf", 00:23:15.383 "config": [ 00:23:15.383 { 00:23:15.383 "method": "nvmf_set_config", 00:23:15.383 "params": { 00:23:15.383 "discovery_filter": "match_any", 00:23:15.383 "admin_cmd_passthru": { 00:23:15.383 "identify_ctrlr": false 00:23:15.383 }, 00:23:15.383 "dhchap_digests": [ 00:23:15.383 "sha256", 00:23:15.383 "sha384", 00:23:15.383 "sha512" 00:23:15.383 ], 00:23:15.383 "dhchap_dhgroups": [ 00:23:15.383 "null", 00:23:15.383 "ffdhe2048", 00:23:15.383 "ffdhe3072", 00:23:15.383 "ffdhe4096", 00:23:15.383 "ffdhe6144", 00:23:15.383 "ffdhe8192" 00:23:15.383 ] 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_set_max_subsystems", 00:23:15.383 "params": { 00:23:15.383 "max_subsystems": 1024 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_set_crdt", 00:23:15.383 "params": { 00:23:15.383 "crdt1": 0, 00:23:15.383 "crdt2": 0, 00:23:15.383 "crdt3": 0 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_create_transport", 00:23:15.383 "params": { 00:23:15.383 "trtype": "TCP", 00:23:15.383 "max_queue_depth": 128, 00:23:15.383 "max_io_qpairs_per_ctrlr": 127, 00:23:15.383 "in_capsule_data_size": 4096, 00:23:15.383 "max_io_size": 131072, 00:23:15.383 "io_unit_size": 131072, 00:23:15.383 "max_aq_depth": 128, 00:23:15.383 "num_shared_buffers": 511, 00:23:15.383 "buf_cache_size": 4294967295, 00:23:15.383 "dif_insert_or_strip": false, 00:23:15.383 "zcopy": false, 00:23:15.383 "c2h_success": false, 00:23:15.383 "sock_priority": 0, 00:23:15.383 "abort_timeout_sec": 1, 00:23:15.383 "ack_timeout": 0, 00:23:15.383 "data_wr_pool_size": 0 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_create_subsystem", 00:23:15.383 "params": { 00:23:15.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.383 
"allow_any_host": false, 00:23:15.383 "serial_number": "SPDK00000000000001", 00:23:15.383 "model_number": "SPDK bdev Controller", 00:23:15.383 "max_namespaces": 10, 00:23:15.383 "min_cntlid": 1, 00:23:15.383 "max_cntlid": 65519, 00:23:15.383 "ana_reporting": false 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_subsystem_add_host", 00:23:15.383 "params": { 00:23:15.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.383 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.383 "psk": "key0" 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_subsystem_add_ns", 00:23:15.383 "params": { 00:23:15.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.383 "namespace": { 00:23:15.383 "nsid": 1, 00:23:15.383 "bdev_name": "malloc0", 00:23:15.383 "nguid": "0B339FC2F6844734A07AC6132260FCF2", 00:23:15.383 "uuid": "0b339fc2-f684-4734-a07a-c6132260fcf2", 00:23:15.383 "no_auto_visible": false 00:23:15.383 } 00:23:15.383 } 00:23:15.383 }, 00:23:15.383 { 00:23:15.383 "method": "nvmf_subsystem_add_listener", 00:23:15.383 "params": { 00:23:15.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.383 "listen_address": { 00:23:15.383 "trtype": "TCP", 00:23:15.383 "adrfam": "IPv4", 00:23:15.383 "traddr": "10.0.0.2", 00:23:15.383 "trsvcid": "4420" 00:23:15.383 }, 00:23:15.383 "secure_channel": true 00:23:15.383 } 00:23:15.383 } 00:23:15.383 ] 00:23:15.383 } 00:23:15.383 ] 00:23:15.383 }' 00:23:15.383 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:15.646 "subsystems": [ 00:23:15.646 { 00:23:15.646 "subsystem": "keyring", 00:23:15.646 "config": [ 00:23:15.646 { 00:23:15.646 "method": "keyring_file_add_key", 00:23:15.646 "params": { 00:23:15.646 "name": "key0", 00:23:15.646 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:15.646 } 
00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "iobuf", 00:23:15.646 "config": [ 00:23:15.646 { 00:23:15.646 "method": "iobuf_set_options", 00:23:15.646 "params": { 00:23:15.646 "small_pool_count": 8192, 00:23:15.646 "large_pool_count": 1024, 00:23:15.646 "small_bufsize": 8192, 00:23:15.646 "large_bufsize": 135168, 00:23:15.646 "enable_numa": false 00:23:15.646 } 00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "sock", 00:23:15.646 "config": [ 00:23:15.646 { 00:23:15.646 "method": "sock_set_default_impl", 00:23:15.646 "params": { 00:23:15.646 "impl_name": "posix" 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "sock_impl_set_options", 00:23:15.646 "params": { 00:23:15.646 "impl_name": "ssl", 00:23:15.646 "recv_buf_size": 4096, 00:23:15.646 "send_buf_size": 4096, 00:23:15.646 "enable_recv_pipe": true, 00:23:15.646 "enable_quickack": false, 00:23:15.646 "enable_placement_id": 0, 00:23:15.646 "enable_zerocopy_send_server": true, 00:23:15.646 "enable_zerocopy_send_client": false, 00:23:15.646 "zerocopy_threshold": 0, 00:23:15.646 "tls_version": 0, 00:23:15.646 "enable_ktls": false 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "sock_impl_set_options", 00:23:15.646 "params": { 00:23:15.646 "impl_name": "posix", 00:23:15.646 "recv_buf_size": 2097152, 00:23:15.646 "send_buf_size": 2097152, 00:23:15.646 "enable_recv_pipe": true, 00:23:15.646 "enable_quickack": false, 00:23:15.646 "enable_placement_id": 0, 00:23:15.646 "enable_zerocopy_send_server": true, 00:23:15.646 "enable_zerocopy_send_client": false, 00:23:15.646 "zerocopy_threshold": 0, 00:23:15.646 "tls_version": 0, 00:23:15.646 "enable_ktls": false 00:23:15.646 } 00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "vmd", 00:23:15.646 "config": [] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "accel", 00:23:15.646 "config": [ 00:23:15.646 { 00:23:15.646 
"method": "accel_set_options", 00:23:15.646 "params": { 00:23:15.646 "small_cache_size": 128, 00:23:15.646 "large_cache_size": 16, 00:23:15.646 "task_count": 2048, 00:23:15.646 "sequence_count": 2048, 00:23:15.646 "buf_count": 2048 00:23:15.646 } 00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "bdev", 00:23:15.646 "config": [ 00:23:15.646 { 00:23:15.646 "method": "bdev_set_options", 00:23:15.646 "params": { 00:23:15.646 "bdev_io_pool_size": 65535, 00:23:15.646 "bdev_io_cache_size": 256, 00:23:15.646 "bdev_auto_examine": true, 00:23:15.646 "iobuf_small_cache_size": 128, 00:23:15.646 "iobuf_large_cache_size": 16 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_raid_set_options", 00:23:15.646 "params": { 00:23:15.646 "process_window_size_kb": 1024, 00:23:15.646 "process_max_bandwidth_mb_sec": 0 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_iscsi_set_options", 00:23:15.646 "params": { 00:23:15.646 "timeout_sec": 30 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_nvme_set_options", 00:23:15.646 "params": { 00:23:15.646 "action_on_timeout": "none", 00:23:15.646 "timeout_us": 0, 00:23:15.646 "timeout_admin_us": 0, 00:23:15.646 "keep_alive_timeout_ms": 10000, 00:23:15.646 "arbitration_burst": 0, 00:23:15.646 "low_priority_weight": 0, 00:23:15.646 "medium_priority_weight": 0, 00:23:15.646 "high_priority_weight": 0, 00:23:15.646 "nvme_adminq_poll_period_us": 10000, 00:23:15.646 "nvme_ioq_poll_period_us": 0, 00:23:15.646 "io_queue_requests": 512, 00:23:15.646 "delay_cmd_submit": true, 00:23:15.646 "transport_retry_count": 4, 00:23:15.646 "bdev_retry_count": 3, 00:23:15.646 "transport_ack_timeout": 0, 00:23:15.646 "ctrlr_loss_timeout_sec": 0, 00:23:15.646 "reconnect_delay_sec": 0, 00:23:15.646 "fast_io_fail_timeout_sec": 0, 00:23:15.646 "disable_auto_failback": false, 00:23:15.646 "generate_uuids": false, 00:23:15.646 "transport_tos": 0, 00:23:15.646 
"nvme_error_stat": false, 00:23:15.646 "rdma_srq_size": 0, 00:23:15.646 "io_path_stat": false, 00:23:15.646 "allow_accel_sequence": false, 00:23:15.646 "rdma_max_cq_size": 0, 00:23:15.646 "rdma_cm_event_timeout_ms": 0, 00:23:15.646 "dhchap_digests": [ 00:23:15.646 "sha256", 00:23:15.646 "sha384", 00:23:15.646 "sha512" 00:23:15.646 ], 00:23:15.646 "dhchap_dhgroups": [ 00:23:15.646 "null", 00:23:15.646 "ffdhe2048", 00:23:15.646 "ffdhe3072", 00:23:15.646 "ffdhe4096", 00:23:15.646 "ffdhe6144", 00:23:15.646 "ffdhe8192" 00:23:15.646 ] 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_nvme_attach_controller", 00:23:15.646 "params": { 00:23:15.646 "name": "TLSTEST", 00:23:15.646 "trtype": "TCP", 00:23:15.646 "adrfam": "IPv4", 00:23:15.646 "traddr": "10.0.0.2", 00:23:15.646 "trsvcid": "4420", 00:23:15.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.646 "prchk_reftag": false, 00:23:15.646 "prchk_guard": false, 00:23:15.646 "ctrlr_loss_timeout_sec": 0, 00:23:15.646 "reconnect_delay_sec": 0, 00:23:15.646 "fast_io_fail_timeout_sec": 0, 00:23:15.646 "psk": "key0", 00:23:15.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.646 "hdgst": false, 00:23:15.646 "ddgst": false, 00:23:15.646 "multipath": "multipath" 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_nvme_set_hotplug", 00:23:15.646 "params": { 00:23:15.646 "period_us": 100000, 00:23:15.646 "enable": false 00:23:15.646 } 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "method": "bdev_wait_for_examine" 00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }, 00:23:15.646 { 00:23:15.646 "subsystem": "nbd", 00:23:15.646 "config": [] 00:23:15.646 } 00:23:15.646 ] 00:23:15.646 }' 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 946115 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 946115 ']' 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 946115 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946115 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946115' 00:23:15.646 killing process with pid 946115 00:23:15.646 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 946115 00:23:15.647 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.647 00:23:15.647 Latency(us) 00:23:15.647 [2024-11-29T12:07:18.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.647 [2024-11-29T12:07:18.327Z] =================================================================================================================== 00:23:15.647 [2024-11-29T12:07:18.327Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 946115 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 945729 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 945729 ']' 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 945729 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 945729 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 945729' 00:23:15.647 killing process with pid 945729 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 945729 00:23:15.647 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 945729 00:23:15.909 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:15.909 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.909 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.909 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.909 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:15.909 "subsystems": [ 00:23:15.909 { 00:23:15.909 "subsystem": "keyring", 00:23:15.909 "config": [ 00:23:15.909 { 00:23:15.909 "method": "keyring_file_add_key", 00:23:15.909 "params": { 00:23:15.909 "name": "key0", 00:23:15.909 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:15.909 } 00:23:15.909 } 00:23:15.909 ] 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "subsystem": "iobuf", 00:23:15.909 "config": [ 00:23:15.909 { 00:23:15.909 "method": "iobuf_set_options", 00:23:15.909 "params": { 00:23:15.909 "small_pool_count": 8192, 00:23:15.909 "large_pool_count": 1024, 00:23:15.909 "small_bufsize": 8192, 00:23:15.909 "large_bufsize": 135168, 
00:23:15.909 "enable_numa": false 00:23:15.909 } 00:23:15.909 } 00:23:15.909 ] 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "subsystem": "sock", 00:23:15.909 "config": [ 00:23:15.909 { 00:23:15.909 "method": "sock_set_default_impl", 00:23:15.909 "params": { 00:23:15.909 "impl_name": "posix" 00:23:15.909 } 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "method": "sock_impl_set_options", 00:23:15.909 "params": { 00:23:15.909 "impl_name": "ssl", 00:23:15.909 "recv_buf_size": 4096, 00:23:15.909 "send_buf_size": 4096, 00:23:15.909 "enable_recv_pipe": true, 00:23:15.909 "enable_quickack": false, 00:23:15.909 "enable_placement_id": 0, 00:23:15.909 "enable_zerocopy_send_server": true, 00:23:15.909 "enable_zerocopy_send_client": false, 00:23:15.909 "zerocopy_threshold": 0, 00:23:15.909 "tls_version": 0, 00:23:15.909 "enable_ktls": false 00:23:15.909 } 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "method": "sock_impl_set_options", 00:23:15.909 "params": { 00:23:15.909 "impl_name": "posix", 00:23:15.909 "recv_buf_size": 2097152, 00:23:15.909 "send_buf_size": 2097152, 00:23:15.909 "enable_recv_pipe": true, 00:23:15.909 "enable_quickack": false, 00:23:15.909 "enable_placement_id": 0, 00:23:15.909 "enable_zerocopy_send_server": true, 00:23:15.909 "enable_zerocopy_send_client": false, 00:23:15.909 "zerocopy_threshold": 0, 00:23:15.909 "tls_version": 0, 00:23:15.909 "enable_ktls": false 00:23:15.909 } 00:23:15.909 } 00:23:15.909 ] 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "subsystem": "vmd", 00:23:15.909 "config": [] 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "subsystem": "accel", 00:23:15.909 "config": [ 00:23:15.909 { 00:23:15.909 "method": "accel_set_options", 00:23:15.909 "params": { 00:23:15.909 "small_cache_size": 128, 00:23:15.909 "large_cache_size": 16, 00:23:15.909 "task_count": 2048, 00:23:15.909 "sequence_count": 2048, 00:23:15.909 "buf_count": 2048 00:23:15.909 } 00:23:15.909 } 00:23:15.909 ] 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "subsystem": "bdev", 00:23:15.909 
"config": [ 00:23:15.909 { 00:23:15.909 "method": "bdev_set_options", 00:23:15.909 "params": { 00:23:15.909 "bdev_io_pool_size": 65535, 00:23:15.909 "bdev_io_cache_size": 256, 00:23:15.909 "bdev_auto_examine": true, 00:23:15.909 "iobuf_small_cache_size": 128, 00:23:15.909 "iobuf_large_cache_size": 16 00:23:15.909 } 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "method": "bdev_raid_set_options", 00:23:15.909 "params": { 00:23:15.909 "process_window_size_kb": 1024, 00:23:15.909 "process_max_bandwidth_mb_sec": 0 00:23:15.909 } 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "method": "bdev_iscsi_set_options", 00:23:15.909 "params": { 00:23:15.909 "timeout_sec": 30 00:23:15.909 } 00:23:15.909 }, 00:23:15.909 { 00:23:15.909 "method": "bdev_nvme_set_options", 00:23:15.909 "params": { 00:23:15.909 "action_on_timeout": "none", 00:23:15.910 "timeout_us": 0, 00:23:15.910 "timeout_admin_us": 0, 00:23:15.910 "keep_alive_timeout_ms": 10000, 00:23:15.910 "arbitration_burst": 0, 00:23:15.910 "low_priority_weight": 0, 00:23:15.910 "medium_priority_weight": 0, 00:23:15.910 "high_priority_weight": 0, 00:23:15.910 "nvme_adminq_poll_period_us": 10000, 00:23:15.910 "nvme_ioq_poll_period_us": 0, 00:23:15.910 "io_queue_requests": 0, 00:23:15.910 "delay_cmd_submit": true, 00:23:15.910 "transport_retry_count": 4, 00:23:15.910 "bdev_retry_count": 3, 00:23:15.910 "transport_ack_timeout": 0, 00:23:15.910 "ctrlr_loss_timeout_sec": 0, 00:23:15.910 "reconnect_delay_sec": 0, 00:23:15.910 "fast_io_fail_timeout_sec": 0, 00:23:15.910 "disable_auto_failback": false, 00:23:15.910 "generate_uuids": false, 00:23:15.910 "transport_tos": 0, 00:23:15.910 "nvme_error_stat": false, 00:23:15.910 "rdma_srq_size": 0, 00:23:15.910 "io_path_stat": false, 00:23:15.910 "allow_accel_sequence": false, 00:23:15.910 "rdma_max_cq_size": 0, 00:23:15.910 "rdma_cm_event_timeout_ms": 0, 00:23:15.910 "dhchap_digests": [ 00:23:15.910 "sha256", 00:23:15.910 "sha384", 00:23:15.910 "sha512" 00:23:15.910 ], 00:23:15.910 
"dhchap_dhgroups": [ 00:23:15.910 "null", 00:23:15.910 "ffdhe2048", 00:23:15.910 "ffdhe3072", 00:23:15.910 "ffdhe4096", 00:23:15.910 "ffdhe6144", 00:23:15.910 "ffdhe8192" 00:23:15.910 ] 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "bdev_nvme_set_hotplug", 00:23:15.910 "params": { 00:23:15.910 "period_us": 100000, 00:23:15.910 "enable": false 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "bdev_malloc_create", 00:23:15.910 "params": { 00:23:15.910 "name": "malloc0", 00:23:15.910 "num_blocks": 8192, 00:23:15.910 "block_size": 4096, 00:23:15.910 "physical_block_size": 4096, 00:23:15.910 "uuid": "0b339fc2-f684-4734-a07a-c6132260fcf2", 00:23:15.910 "optimal_io_boundary": 0, 00:23:15.910 "md_size": 0, 00:23:15.910 "dif_type": 0, 00:23:15.910 "dif_is_head_of_md": false, 00:23:15.910 "dif_pi_format": 0 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "bdev_wait_for_examine" 00:23:15.910 } 00:23:15.910 ] 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "subsystem": "nbd", 00:23:15.910 "config": [] 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "subsystem": "scheduler", 00:23:15.910 "config": [ 00:23:15.910 { 00:23:15.910 "method": "framework_set_scheduler", 00:23:15.910 "params": { 00:23:15.910 "name": "static" 00:23:15.910 } 00:23:15.910 } 00:23:15.910 ] 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "subsystem": "nvmf", 00:23:15.910 "config": [ 00:23:15.910 { 00:23:15.910 "method": "nvmf_set_config", 00:23:15.910 "params": { 00:23:15.910 "discovery_filter": "match_any", 00:23:15.910 "admin_cmd_passthru": { 00:23:15.910 "identify_ctrlr": false 00:23:15.910 }, 00:23:15.910 "dhchap_digests": [ 00:23:15.910 "sha256", 00:23:15.910 "sha384", 00:23:15.910 "sha512" 00:23:15.910 ], 00:23:15.910 "dhchap_dhgroups": [ 00:23:15.910 "null", 00:23:15.910 "ffdhe2048", 00:23:15.910 "ffdhe3072", 00:23:15.910 "ffdhe4096", 00:23:15.910 "ffdhe6144", 00:23:15.910 "ffdhe8192" 00:23:15.910 ] 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 
00:23:15.910 "method": "nvmf_set_max_subsystems", 00:23:15.910 "params": { 00:23:15.910 "max_subsystems": 1024 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_set_crdt", 00:23:15.910 "params": { 00:23:15.910 "crdt1": 0, 00:23:15.910 "crdt2": 0, 00:23:15.910 "crdt3": 0 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_create_transport", 00:23:15.910 "params": { 00:23:15.910 "trtype": "TCP", 00:23:15.910 "max_queue_depth": 128, 00:23:15.910 "max_io_qpairs_per_ctrlr": 127, 00:23:15.910 "in_capsule_data_size": 4096, 00:23:15.910 "max_io_size": 131072, 00:23:15.910 "io_unit_size": 131072, 00:23:15.910 "max_aq_depth": 128, 00:23:15.910 "num_shared_buffers": 511, 00:23:15.910 "buf_cache_size": 4294967295, 00:23:15.910 "dif_insert_or_strip": false, 00:23:15.910 "zcopy": false, 00:23:15.910 "c2h_success": false, 00:23:15.910 "sock_priority": 0, 00:23:15.910 "abort_timeout_sec": 1, 00:23:15.910 "ack_timeout": 0, 00:23:15.910 "data_wr_pool_size": 0 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_create_subsystem", 00:23:15.910 "params": { 00:23:15.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.910 "allow_any_host": false, 00:23:15.910 "serial_number": "SPDK00000000000001", 00:23:15.910 "model_number": "SPDK bdev Controller", 00:23:15.910 "max_namespaces": 10, 00:23:15.910 "min_cntlid": 1, 00:23:15.910 "max_cntlid": 65519, 00:23:15.910 "ana_reporting": false 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_subsystem_add_host", 00:23:15.910 "params": { 00:23:15.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.910 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.910 "psk": "key0" 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_subsystem_add_ns", 00:23:15.910 "params": { 00:23:15.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.910 "namespace": { 00:23:15.910 "nsid": 1, 00:23:15.910 "bdev_name": "malloc0", 00:23:15.910 "nguid": 
"0B339FC2F6844734A07AC6132260FCF2", 00:23:15.910 "uuid": "0b339fc2-f684-4734-a07a-c6132260fcf2", 00:23:15.910 "no_auto_visible": false 00:23:15.910 } 00:23:15.910 } 00:23:15.910 }, 00:23:15.910 { 00:23:15.910 "method": "nvmf_subsystem_add_listener", 00:23:15.910 "params": { 00:23:15.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.910 "listen_address": { 00:23:15.910 "trtype": "TCP", 00:23:15.910 "adrfam": "IPv4", 00:23:15.910 "traddr": "10.0.0.2", 00:23:15.910 "trsvcid": "4420" 00:23:15.910 }, 00:23:15.910 "secure_channel": true 00:23:15.910 } 00:23:15.910 } 00:23:15.910 ] 00:23:15.910 } 00:23:15.910 ] 00:23:15.910 }' 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=946657 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 946657 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 946657 ']' 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.910 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.910 [2024-11-29 13:07:18.481646] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:15.910 [2024-11-29 13:07:18.481704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.910 [2024-11-29 13:07:18.571179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.171 [2024-11-29 13:07:18.600821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.171 [2024-11-29 13:07:18.600850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.171 [2024-11-29 13:07:18.600855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.171 [2024-11-29 13:07:18.600860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.171 [2024-11-29 13:07:18.600864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.171 [2024-11-29 13:07:18.601339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.171 [2024-11-29 13:07:18.795217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.171 [2024-11-29 13:07:18.827251] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.171 [2024-11-29 13:07:18.827444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=946802 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 946802 /var/tmp/bdevperf.sock 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 946802 ']' 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:16.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.745 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:16.745 "subsystems": [ 00:23:16.745 { 00:23:16.745 "subsystem": "keyring", 00:23:16.745 "config": [ 00:23:16.745 { 00:23:16.745 "method": "keyring_file_add_key", 00:23:16.745 "params": { 00:23:16.745 "name": "key0", 00:23:16.745 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:16.745 } 00:23:16.745 } 00:23:16.745 ] 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "subsystem": "iobuf", 00:23:16.745 "config": [ 00:23:16.745 { 00:23:16.745 "method": "iobuf_set_options", 00:23:16.745 "params": { 00:23:16.745 "small_pool_count": 8192, 00:23:16.745 "large_pool_count": 1024, 00:23:16.745 "small_bufsize": 8192, 00:23:16.745 "large_bufsize": 135168, 00:23:16.745 "enable_numa": false 00:23:16.745 } 00:23:16.745 } 00:23:16.745 ] 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "subsystem": "sock", 00:23:16.745 "config": [ 00:23:16.745 { 00:23:16.745 "method": "sock_set_default_impl", 00:23:16.745 "params": { 00:23:16.745 "impl_name": "posix" 00:23:16.745 } 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "method": "sock_impl_set_options", 00:23:16.745 "params": { 00:23:16.745 "impl_name": "ssl", 00:23:16.745 "recv_buf_size": 4096, 00:23:16.745 "send_buf_size": 4096, 00:23:16.745 "enable_recv_pipe": true, 00:23:16.745 "enable_quickack": false, 00:23:16.745 "enable_placement_id": 0, 00:23:16.745 "enable_zerocopy_send_server": true, 00:23:16.745 
"enable_zerocopy_send_client": false, 00:23:16.745 "zerocopy_threshold": 0, 00:23:16.745 "tls_version": 0, 00:23:16.745 "enable_ktls": false 00:23:16.745 } 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "method": "sock_impl_set_options", 00:23:16.745 "params": { 00:23:16.745 "impl_name": "posix", 00:23:16.745 "recv_buf_size": 2097152, 00:23:16.745 "send_buf_size": 2097152, 00:23:16.745 "enable_recv_pipe": true, 00:23:16.745 "enable_quickack": false, 00:23:16.745 "enable_placement_id": 0, 00:23:16.745 "enable_zerocopy_send_server": true, 00:23:16.745 "enable_zerocopy_send_client": false, 00:23:16.745 "zerocopy_threshold": 0, 00:23:16.745 "tls_version": 0, 00:23:16.745 "enable_ktls": false 00:23:16.745 } 00:23:16.745 } 00:23:16.745 ] 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "subsystem": "vmd", 00:23:16.745 "config": [] 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "subsystem": "accel", 00:23:16.745 "config": [ 00:23:16.745 { 00:23:16.745 "method": "accel_set_options", 00:23:16.745 "params": { 00:23:16.745 "small_cache_size": 128, 00:23:16.745 "large_cache_size": 16, 00:23:16.745 "task_count": 2048, 00:23:16.745 "sequence_count": 2048, 00:23:16.745 "buf_count": 2048 00:23:16.745 } 00:23:16.745 } 00:23:16.745 ] 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "subsystem": "bdev", 00:23:16.745 "config": [ 00:23:16.745 { 00:23:16.745 "method": "bdev_set_options", 00:23:16.745 "params": { 00:23:16.745 "bdev_io_pool_size": 65535, 00:23:16.745 "bdev_io_cache_size": 256, 00:23:16.745 "bdev_auto_examine": true, 00:23:16.745 "iobuf_small_cache_size": 128, 00:23:16.745 "iobuf_large_cache_size": 16 00:23:16.745 } 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "method": "bdev_raid_set_options", 00:23:16.745 "params": { 00:23:16.745 "process_window_size_kb": 1024, 00:23:16.745 "process_max_bandwidth_mb_sec": 0 00:23:16.745 } 00:23:16.745 }, 00:23:16.745 { 00:23:16.745 "method": "bdev_iscsi_set_options", 00:23:16.745 "params": { 00:23:16.745 "timeout_sec": 30 00:23:16.745 } 00:23:16.745 }, 
00:23:16.745 { 00:23:16.745 "method": "bdev_nvme_set_options", 00:23:16.745 "params": { 00:23:16.745 "action_on_timeout": "none", 00:23:16.745 "timeout_us": 0, 00:23:16.745 "timeout_admin_us": 0, 00:23:16.745 "keep_alive_timeout_ms": 10000, 00:23:16.745 "arbitration_burst": 0, 00:23:16.745 "low_priority_weight": 0, 00:23:16.745 "medium_priority_weight": 0, 00:23:16.745 "high_priority_weight": 0, 00:23:16.745 "nvme_adminq_poll_period_us": 10000, 00:23:16.745 "nvme_ioq_poll_period_us": 0, 00:23:16.745 "io_queue_requests": 512, 00:23:16.745 "delay_cmd_submit": true, 00:23:16.745 "transport_retry_count": 4, 00:23:16.746 "bdev_retry_count": 3, 00:23:16.746 "transport_ack_timeout": 0, 00:23:16.746 "ctrlr_loss_timeout_sec": 0, 00:23:16.746 "reconnect_delay_sec": 0, 00:23:16.746 "fast_io_fail_timeout_sec": 0, 00:23:16.746 "disable_auto_failback": false, 00:23:16.746 "generate_uuids": false, 00:23:16.746 "transport_tos": 0, 00:23:16.746 "nvme_error_stat": false, 00:23:16.746 "rdma_srq_size": 0, 00:23:16.746 "io_path_stat": false, 00:23:16.746 "allow_accel_sequence": false, 00:23:16.746 "rdma_max_cq_size": 0, 00:23:16.746 "rdma_cm_event_timeout_ms": 0, 00:23:16.746 "dhchap_digests": [ 00:23:16.746 "sha256", 00:23:16.746 "sha384", 00:23:16.746 "sha512" 00:23:16.746 ], 00:23:16.746 "dhchap_dhgroups": [ 00:23:16.746 "null", 00:23:16.746 "ffdhe2048", 00:23:16.746 "ffdhe3072", 00:23:16.746 "ffdhe4096", 00:23:16.746 "ffdhe6144", 00:23:16.746 "ffdhe8192" 00:23:16.746 ] 00:23:16.746 } 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "method": "bdev_nvme_attach_controller", 00:23:16.746 "params": { 00:23:16.746 "name": "TLSTEST", 00:23:16.746 "trtype": "TCP", 00:23:16.746 "adrfam": "IPv4", 00:23:16.746 "traddr": "10.0.0.2", 00:23:16.746 "trsvcid": "4420", 00:23:16.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.746 "prchk_reftag": false, 00:23:16.746 "prchk_guard": false, 00:23:16.746 "ctrlr_loss_timeout_sec": 0, 00:23:16.746 "reconnect_delay_sec": 0, 00:23:16.746 
"fast_io_fail_timeout_sec": 0, 00:23:16.746 "psk": "key0", 00:23:16.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.746 "hdgst": false, 00:23:16.746 "ddgst": false, 00:23:16.746 "multipath": "multipath" 00:23:16.746 } 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "method": "bdev_nvme_set_hotplug", 00:23:16.746 "params": { 00:23:16.746 "period_us": 100000, 00:23:16.746 "enable": false 00:23:16.746 } 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "method": "bdev_wait_for_examine" 00:23:16.746 } 00:23:16.746 ] 00:23:16.746 }, 00:23:16.746 { 00:23:16.746 "subsystem": "nbd", 00:23:16.746 "config": [] 00:23:16.746 } 00:23:16.746 ] 00:23:16.746 }' 00:23:16.746 [2024-11-29 13:07:19.357806] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:16.746 [2024-11-29 13:07:19.357860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946802 ] 00:23:17.007 [2024-11-29 13:07:19.442745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.007 [2024-11-29 13:07:19.471813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.007 [2024-11-29 13:07:19.606599] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.578 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.578 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.578 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.578 Running I/O for 10 seconds... 
00:23:19.905 4910.00 IOPS, 19.18 MiB/s [2024-11-29T12:07:23.526Z] 5097.50 IOPS, 19.91 MiB/s [2024-11-29T12:07:24.468Z] 5414.00 IOPS, 21.15 MiB/s [2024-11-29T12:07:25.420Z] 5644.50 IOPS, 22.05 MiB/s [2024-11-29T12:07:26.363Z] 5663.00 IOPS, 22.12 MiB/s [2024-11-29T12:07:27.307Z] 5648.00 IOPS, 22.06 MiB/s [2024-11-29T12:07:28.691Z] 5573.29 IOPS, 21.77 MiB/s [2024-11-29T12:07:29.302Z] 5677.50 IOPS, 22.18 MiB/s [2024-11-29T12:07:30.287Z] 5570.11 IOPS, 21.76 MiB/s [2024-11-29T12:07:30.547Z] 5553.30 IOPS, 21.69 MiB/s 00:23:27.867 Latency(us) 00:23:27.867 [2024-11-29T12:07:30.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.867 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.867 Verification LBA range: start 0x0 length 0x2000 00:23:27.867 TLSTESTn1 : 10.02 5554.29 21.70 0.00 0.00 23009.34 5188.27 29491.20 00:23:27.867 [2024-11-29T12:07:30.547Z] =================================================================================================================== 00:23:27.867 [2024-11-29T12:07:30.548Z] Total : 5554.29 21.70 0.00 0.00 23009.34 5188.27 29491.20 00:23:27.868 { 00:23:27.868 "results": [ 00:23:27.868 { 00:23:27.868 "job": "TLSTESTn1", 00:23:27.868 "core_mask": "0x4", 00:23:27.868 "workload": "verify", 00:23:27.868 "status": "finished", 00:23:27.868 "verify_range": { 00:23:27.868 "start": 0, 00:23:27.868 "length": 8192 00:23:27.868 }, 00:23:27.868 "queue_depth": 128, 00:23:27.868 "io_size": 4096, 00:23:27.868 "runtime": 10.02108, 00:23:27.868 "iops": 5554.291553405421, 00:23:27.868 "mibps": 21.696451380489925, 00:23:27.868 "io_failed": 0, 00:23:27.868 "io_timeout": 0, 00:23:27.868 "avg_latency_us": 23009.33666403162, 00:23:27.868 "min_latency_us": 5188.266666666666, 00:23:27.868 "max_latency_us": 29491.2 00:23:27.868 } 00:23:27.868 ], 00:23:27.868 "core_count": 1 00:23:27.868 } 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 946802 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 946802 ']' 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 946802 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946802 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946802' 00:23:27.868 killing process with pid 946802 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 946802 00:23:27.868 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.868 00:23:27.868 Latency(us) 00:23:27.868 [2024-11-29T12:07:30.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.868 [2024-11-29T12:07:30.548Z] =================================================================================================================== 00:23:27.868 [2024-11-29T12:07:30.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 946802 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 946657 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 
946657 ']' 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 946657 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.868 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946657 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946657' 00:23:28.129 killing process with pid 946657 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 946657 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 946657 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=949090 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 949090 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 949090 ']' 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.129 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.129 [2024-11-29 13:07:30.731930] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:28.129 [2024-11-29 13:07:30.731987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.391 [2024-11-29 13:07:30.827969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.391 [2024-11-29 13:07:30.874178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.391 [2024-11-29 13:07:30.874252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.391 [2024-11-29 13:07:30.874261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.391 [2024-11-29 13:07:30.874269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.391 [2024-11-29 13:07:30.874275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.391 [2024-11-29 13:07:30.875024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rkkm9JK3VS 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rkkm9JK3VS 00:23:28.962 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.224 [2024-11-29 13:07:31.754245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.224 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.485 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.485 [2024-11-29 13:07:32.119176] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.485 [2024-11-29 13:07:32.119537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:29.485 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.745 malloc0 00:23:29.745 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.004 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=949514 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 949514 /var/tmp/bdevperf.sock 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949514 ']' 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:30.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.265 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.265 [2024-11-29 13:07:32.911513] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:30.265 [2024-11-29 13:07:32.911585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949514 ] 00:23:30.527 [2024-11-29 13:07:32.997961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.527 [2024-11-29 13:07:33.032266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.098 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.098 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.098 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:31.358 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:31.358 [2024-11-29 13:07:34.011265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.619 nvme0n1 00:23:31.619 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.619 Running I/O for 1 seconds... 00:23:32.559 5771.00 IOPS, 22.54 MiB/s 00:23:32.559 Latency(us) 00:23:32.559 [2024-11-29T12:07:35.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.559 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:32.559 Verification LBA range: start 0x0 length 0x2000 00:23:32.559 nvme0n1 : 1.02 5810.79 22.70 0.00 0.00 21879.19 6225.92 39103.15 00:23:32.559 [2024-11-29T12:07:35.239Z] =================================================================================================================== 00:23:32.559 [2024-11-29T12:07:35.239Z] Total : 5810.79 22.70 0.00 0.00 21879.19 6225.92 39103.15 00:23:32.559 { 00:23:32.559 "results": [ 00:23:32.559 { 00:23:32.559 "job": "nvme0n1", 00:23:32.559 "core_mask": "0x2", 00:23:32.559 "workload": "verify", 00:23:32.559 "status": "finished", 00:23:32.559 "verify_range": { 00:23:32.559 "start": 0, 00:23:32.559 "length": 8192 00:23:32.559 }, 00:23:32.559 "queue_depth": 128, 00:23:32.559 "io_size": 4096, 00:23:32.559 "runtime": 1.015352, 00:23:32.559 "iops": 5810.792710311301, 00:23:32.559 "mibps": 22.698409024653518, 00:23:32.559 "io_failed": 0, 00:23:32.559 "io_timeout": 0, 00:23:32.559 "avg_latency_us": 21879.191285875706, 00:23:32.559 "min_latency_us": 6225.92, 00:23:32.559 "max_latency_us": 39103.14666666667 00:23:32.559 } 00:23:32.559 ], 00:23:32.559 "core_count": 1 00:23:32.559 } 00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 949514 00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949514 ']' 00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949514 00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.559 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949514 00:23:32.819 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949514' 00:23:32.820 killing process with pid 949514 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949514 00:23:32.820 Received shutdown signal, test time was about 1.000000 seconds 00:23:32.820 00:23:32.820 Latency(us) 00:23:32.820 [2024-11-29T12:07:35.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.820 [2024-11-29T12:07:35.500Z] =================================================================================================================== 00:23:32.820 [2024-11-29T12:07:35.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949514 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 949090 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949090 ']' 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949090 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 949090 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949090' 00:23:32.820 killing process with pid 949090 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949090 00:23:32.820 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949090 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=949968 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 949968 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949968 ']' 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.081 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.081 [2024-11-29 13:07:35.623814] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:33.082 [2024-11-29 13:07:35.623878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.082 [2024-11-29 13:07:35.720607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.342 [2024-11-29 13:07:35.772003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.342 [2024-11-29 13:07:35.772057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.342 [2024-11-29 13:07:35.772065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.342 [2024-11-29 13:07:35.772073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.342 [2024-11-29 13:07:35.772079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.342 [2024-11-29 13:07:35.772813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.914 [2024-11-29 13:07:36.462147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.914 malloc0 00:23:33.914 [2024-11-29 13:07:36.492600] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.914 [2024-11-29 13:07:36.492927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=950218 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 950218 /var/tmp/bdevperf.sock 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950218 ']' 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.914 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.914 [2024-11-29 13:07:36.582962] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:23:33.914 [2024-11-29 13:07:36.583023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950218 ] 00:23:34.174 [2024-11-29 13:07:36.669363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.174 [2024-11-29 13:07:36.703664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.746 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.746 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.746 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rkkm9JK3VS 00:23:35.006 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:35.268 [2024-11-29 13:07:37.686451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.268 nvme0n1 00:23:35.268 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.268 Running I/O for 1 seconds... 
00:23:36.468 5027.00 IOPS, 19.64 MiB/s 00:23:36.468 Latency(us) 00:23:36.468 [2024-11-29T12:07:39.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.468 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.468 Verification LBA range: start 0x0 length 0x2000 00:23:36.469 nvme0n1 : 1.03 5015.80 19.59 0.00 0.00 25309.14 4614.83 31020.37 00:23:36.469 [2024-11-29T12:07:39.149Z] =================================================================================================================== 00:23:36.469 [2024-11-29T12:07:39.149Z] Total : 5015.80 19.59 0.00 0.00 25309.14 4614.83 31020.37 00:23:36.469 { 00:23:36.469 "results": [ 00:23:36.469 { 00:23:36.469 "job": "nvme0n1", 00:23:36.469 "core_mask": "0x2", 00:23:36.469 "workload": "verify", 00:23:36.469 "status": "finished", 00:23:36.469 "verify_range": { 00:23:36.469 "start": 0, 00:23:36.469 "length": 8192 00:23:36.469 }, 00:23:36.469 "queue_depth": 128, 00:23:36.469 "io_size": 4096, 00:23:36.469 "runtime": 1.027752, 00:23:36.469 "iops": 5015.801477399217, 00:23:36.469 "mibps": 19.592974521090692, 00:23:36.469 "io_failed": 0, 00:23:36.469 "io_timeout": 0, 00:23:36.469 "avg_latency_us": 25309.14029873909, 00:23:36.469 "min_latency_us": 4614.826666666667, 00:23:36.469 "max_latency_us": 31020.373333333333 00:23:36.469 } 00:23:36.469 ], 00:23:36.469 "core_count": 1 00:23:36.469 } 00:23:36.469 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:36.469 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.469 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.469 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.469 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:36.469 "subsystems": [ 00:23:36.469 { 00:23:36.469 "subsystem": 
"keyring", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "keyring_file_add_key", 00:23:36.469 "params": { 00:23:36.469 "name": "key0", 00:23:36.469 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:36.469 } 00:23:36.469 } 00:23:36.469 ] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "iobuf", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "iobuf_set_options", 00:23:36.469 "params": { 00:23:36.469 "small_pool_count": 8192, 00:23:36.469 "large_pool_count": 1024, 00:23:36.469 "small_bufsize": 8192, 00:23:36.469 "large_bufsize": 135168, 00:23:36.469 "enable_numa": false 00:23:36.469 } 00:23:36.469 } 00:23:36.469 ] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "sock", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "sock_set_default_impl", 00:23:36.469 "params": { 00:23:36.469 "impl_name": "posix" 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "sock_impl_set_options", 00:23:36.469 "params": { 00:23:36.469 "impl_name": "ssl", 00:23:36.469 "recv_buf_size": 4096, 00:23:36.469 "send_buf_size": 4096, 00:23:36.469 "enable_recv_pipe": true, 00:23:36.469 "enable_quickack": false, 00:23:36.469 "enable_placement_id": 0, 00:23:36.469 "enable_zerocopy_send_server": true, 00:23:36.469 "enable_zerocopy_send_client": false, 00:23:36.469 "zerocopy_threshold": 0, 00:23:36.469 "tls_version": 0, 00:23:36.469 "enable_ktls": false 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "sock_impl_set_options", 00:23:36.469 "params": { 00:23:36.469 "impl_name": "posix", 00:23:36.469 "recv_buf_size": 2097152, 00:23:36.469 "send_buf_size": 2097152, 00:23:36.469 "enable_recv_pipe": true, 00:23:36.469 "enable_quickack": false, 00:23:36.469 "enable_placement_id": 0, 00:23:36.469 "enable_zerocopy_send_server": true, 00:23:36.469 "enable_zerocopy_send_client": false, 00:23:36.469 "zerocopy_threshold": 0, 00:23:36.469 "tls_version": 0, 00:23:36.469 "enable_ktls": false 00:23:36.469 } 00:23:36.469 } 00:23:36.469 
] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "vmd", 00:23:36.469 "config": [] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "accel", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "accel_set_options", 00:23:36.469 "params": { 00:23:36.469 "small_cache_size": 128, 00:23:36.469 "large_cache_size": 16, 00:23:36.469 "task_count": 2048, 00:23:36.469 "sequence_count": 2048, 00:23:36.469 "buf_count": 2048 00:23:36.469 } 00:23:36.469 } 00:23:36.469 ] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "bdev", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "bdev_set_options", 00:23:36.469 "params": { 00:23:36.469 "bdev_io_pool_size": 65535, 00:23:36.469 "bdev_io_cache_size": 256, 00:23:36.469 "bdev_auto_examine": true, 00:23:36.469 "iobuf_small_cache_size": 128, 00:23:36.469 "iobuf_large_cache_size": 16 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_raid_set_options", 00:23:36.469 "params": { 00:23:36.469 "process_window_size_kb": 1024, 00:23:36.469 "process_max_bandwidth_mb_sec": 0 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_iscsi_set_options", 00:23:36.469 "params": { 00:23:36.469 "timeout_sec": 30 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_nvme_set_options", 00:23:36.469 "params": { 00:23:36.469 "action_on_timeout": "none", 00:23:36.469 "timeout_us": 0, 00:23:36.469 "timeout_admin_us": 0, 00:23:36.469 "keep_alive_timeout_ms": 10000, 00:23:36.469 "arbitration_burst": 0, 00:23:36.469 "low_priority_weight": 0, 00:23:36.469 "medium_priority_weight": 0, 00:23:36.469 "high_priority_weight": 0, 00:23:36.469 "nvme_adminq_poll_period_us": 10000, 00:23:36.469 "nvme_ioq_poll_period_us": 0, 00:23:36.469 "io_queue_requests": 0, 00:23:36.469 "delay_cmd_submit": true, 00:23:36.469 "transport_retry_count": 4, 00:23:36.469 "bdev_retry_count": 3, 00:23:36.469 "transport_ack_timeout": 0, 00:23:36.469 "ctrlr_loss_timeout_sec": 0, 
00:23:36.469 "reconnect_delay_sec": 0, 00:23:36.469 "fast_io_fail_timeout_sec": 0, 00:23:36.469 "disable_auto_failback": false, 00:23:36.469 "generate_uuids": false, 00:23:36.469 "transport_tos": 0, 00:23:36.469 "nvme_error_stat": false, 00:23:36.469 "rdma_srq_size": 0, 00:23:36.469 "io_path_stat": false, 00:23:36.469 "allow_accel_sequence": false, 00:23:36.469 "rdma_max_cq_size": 0, 00:23:36.469 "rdma_cm_event_timeout_ms": 0, 00:23:36.469 "dhchap_digests": [ 00:23:36.469 "sha256", 00:23:36.469 "sha384", 00:23:36.469 "sha512" 00:23:36.469 ], 00:23:36.469 "dhchap_dhgroups": [ 00:23:36.469 "null", 00:23:36.469 "ffdhe2048", 00:23:36.469 "ffdhe3072", 00:23:36.469 "ffdhe4096", 00:23:36.469 "ffdhe6144", 00:23:36.469 "ffdhe8192" 00:23:36.469 ] 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_nvme_set_hotplug", 00:23:36.469 "params": { 00:23:36.469 "period_us": 100000, 00:23:36.469 "enable": false 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_malloc_create", 00:23:36.469 "params": { 00:23:36.469 "name": "malloc0", 00:23:36.469 "num_blocks": 8192, 00:23:36.469 "block_size": 4096, 00:23:36.469 "physical_block_size": 4096, 00:23:36.469 "uuid": "026452b9-6e86-4eac-a1c5-ea66ea57375b", 00:23:36.469 "optimal_io_boundary": 0, 00:23:36.469 "md_size": 0, 00:23:36.469 "dif_type": 0, 00:23:36.469 "dif_is_head_of_md": false, 00:23:36.469 "dif_pi_format": 0 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "bdev_wait_for_examine" 00:23:36.469 } 00:23:36.469 ] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "nbd", 00:23:36.469 "config": [] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "scheduler", 00:23:36.469 "config": [ 00:23:36.469 { 00:23:36.469 "method": "framework_set_scheduler", 00:23:36.469 "params": { 00:23:36.469 "name": "static" 00:23:36.469 } 00:23:36.469 } 00:23:36.469 ] 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "subsystem": "nvmf", 00:23:36.469 "config": [ 00:23:36.469 { 
00:23:36.469 "method": "nvmf_set_config", 00:23:36.469 "params": { 00:23:36.469 "discovery_filter": "match_any", 00:23:36.469 "admin_cmd_passthru": { 00:23:36.469 "identify_ctrlr": false 00:23:36.469 }, 00:23:36.469 "dhchap_digests": [ 00:23:36.469 "sha256", 00:23:36.469 "sha384", 00:23:36.469 "sha512" 00:23:36.469 ], 00:23:36.469 "dhchap_dhgroups": [ 00:23:36.469 "null", 00:23:36.469 "ffdhe2048", 00:23:36.469 "ffdhe3072", 00:23:36.469 "ffdhe4096", 00:23:36.469 "ffdhe6144", 00:23:36.469 "ffdhe8192" 00:23:36.469 ] 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "nvmf_set_max_subsystems", 00:23:36.469 "params": { 00:23:36.469 "max_subsystems": 1024 00:23:36.469 } 00:23:36.469 }, 00:23:36.469 { 00:23:36.469 "method": "nvmf_set_crdt", 00:23:36.469 "params": { 00:23:36.469 "crdt1": 0, 00:23:36.469 "crdt2": 0, 00:23:36.470 "crdt3": 0 00:23:36.470 } 00:23:36.470 }, 00:23:36.470 { 00:23:36.470 "method": "nvmf_create_transport", 00:23:36.470 "params": { 00:23:36.470 "trtype": "TCP", 00:23:36.470 "max_queue_depth": 128, 00:23:36.470 "max_io_qpairs_per_ctrlr": 127, 00:23:36.470 "in_capsule_data_size": 4096, 00:23:36.470 "max_io_size": 131072, 00:23:36.470 "io_unit_size": 131072, 00:23:36.470 "max_aq_depth": 128, 00:23:36.470 "num_shared_buffers": 511, 00:23:36.470 "buf_cache_size": 4294967295, 00:23:36.470 "dif_insert_or_strip": false, 00:23:36.470 "zcopy": false, 00:23:36.470 "c2h_success": false, 00:23:36.470 "sock_priority": 0, 00:23:36.470 "abort_timeout_sec": 1, 00:23:36.470 "ack_timeout": 0, 00:23:36.470 "data_wr_pool_size": 0 00:23:36.470 } 00:23:36.470 }, 00:23:36.470 { 00:23:36.470 "method": "nvmf_create_subsystem", 00:23:36.470 "params": { 00:23:36.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.470 "allow_any_host": false, 00:23:36.470 "serial_number": "00000000000000000000", 00:23:36.470 "model_number": "SPDK bdev Controller", 00:23:36.470 "max_namespaces": 32, 00:23:36.470 "min_cntlid": 1, 00:23:36.470 "max_cntlid": 65519, 00:23:36.470 
"ana_reporting": false 00:23:36.470 } 00:23:36.470 }, 00:23:36.470 { 00:23:36.470 "method": "nvmf_subsystem_add_host", 00:23:36.470 "params": { 00:23:36.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.470 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.470 "psk": "key0" 00:23:36.470 } 00:23:36.470 }, 00:23:36.470 { 00:23:36.470 "method": "nvmf_subsystem_add_ns", 00:23:36.470 "params": { 00:23:36.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.470 "namespace": { 00:23:36.470 "nsid": 1, 00:23:36.470 "bdev_name": "malloc0", 00:23:36.470 "nguid": "026452B96E864EACA1C5EA66EA57375B", 00:23:36.470 "uuid": "026452b9-6e86-4eac-a1c5-ea66ea57375b", 00:23:36.470 "no_auto_visible": false 00:23:36.470 } 00:23:36.470 } 00:23:36.470 }, 00:23:36.470 { 00:23:36.470 "method": "nvmf_subsystem_add_listener", 00:23:36.470 "params": { 00:23:36.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.470 "listen_address": { 00:23:36.470 "trtype": "TCP", 00:23:36.470 "adrfam": "IPv4", 00:23:36.470 "traddr": "10.0.0.2", 00:23:36.470 "trsvcid": "4420" 00:23:36.470 }, 00:23:36.470 "secure_channel": false, 00:23:36.470 "sock_impl": "ssl" 00:23:36.470 } 00:23:36.470 } 00:23:36.470 ] 00:23:36.470 } 00:23:36.470 ] 00:23:36.470 }' 00:23:36.470 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:36.732 "subsystems": [ 00:23:36.732 { 00:23:36.732 "subsystem": "keyring", 00:23:36.732 "config": [ 00:23:36.732 { 00:23:36.732 "method": "keyring_file_add_key", 00:23:36.732 "params": { 00:23:36.732 "name": "key0", 00:23:36.732 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:36.732 } 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "iobuf", 00:23:36.732 "config": [ 00:23:36.732 { 00:23:36.732 "method": "iobuf_set_options", 00:23:36.732 "params": { 00:23:36.732 
"small_pool_count": 8192, 00:23:36.732 "large_pool_count": 1024, 00:23:36.732 "small_bufsize": 8192, 00:23:36.732 "large_bufsize": 135168, 00:23:36.732 "enable_numa": false 00:23:36.732 } 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "sock", 00:23:36.732 "config": [ 00:23:36.732 { 00:23:36.732 "method": "sock_set_default_impl", 00:23:36.732 "params": { 00:23:36.732 "impl_name": "posix" 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "sock_impl_set_options", 00:23:36.732 "params": { 00:23:36.732 "impl_name": "ssl", 00:23:36.732 "recv_buf_size": 4096, 00:23:36.732 "send_buf_size": 4096, 00:23:36.732 "enable_recv_pipe": true, 00:23:36.732 "enable_quickack": false, 00:23:36.732 "enable_placement_id": 0, 00:23:36.732 "enable_zerocopy_send_server": true, 00:23:36.732 "enable_zerocopy_send_client": false, 00:23:36.732 "zerocopy_threshold": 0, 00:23:36.732 "tls_version": 0, 00:23:36.732 "enable_ktls": false 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "sock_impl_set_options", 00:23:36.732 "params": { 00:23:36.732 "impl_name": "posix", 00:23:36.732 "recv_buf_size": 2097152, 00:23:36.732 "send_buf_size": 2097152, 00:23:36.732 "enable_recv_pipe": true, 00:23:36.732 "enable_quickack": false, 00:23:36.732 "enable_placement_id": 0, 00:23:36.732 "enable_zerocopy_send_server": true, 00:23:36.732 "enable_zerocopy_send_client": false, 00:23:36.732 "zerocopy_threshold": 0, 00:23:36.732 "tls_version": 0, 00:23:36.732 "enable_ktls": false 00:23:36.732 } 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "vmd", 00:23:36.732 "config": [] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "accel", 00:23:36.732 "config": [ 00:23:36.732 { 00:23:36.732 "method": "accel_set_options", 00:23:36.732 "params": { 00:23:36.732 "small_cache_size": 128, 00:23:36.732 "large_cache_size": 16, 00:23:36.732 "task_count": 2048, 00:23:36.732 "sequence_count": 2048, 00:23:36.732 
"buf_count": 2048 00:23:36.732 } 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "bdev", 00:23:36.732 "config": [ 00:23:36.732 { 00:23:36.732 "method": "bdev_set_options", 00:23:36.732 "params": { 00:23:36.732 "bdev_io_pool_size": 65535, 00:23:36.732 "bdev_io_cache_size": 256, 00:23:36.732 "bdev_auto_examine": true, 00:23:36.732 "iobuf_small_cache_size": 128, 00:23:36.732 "iobuf_large_cache_size": 16 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_raid_set_options", 00:23:36.732 "params": { 00:23:36.732 "process_window_size_kb": 1024, 00:23:36.732 "process_max_bandwidth_mb_sec": 0 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_iscsi_set_options", 00:23:36.732 "params": { 00:23:36.732 "timeout_sec": 30 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_nvme_set_options", 00:23:36.732 "params": { 00:23:36.732 "action_on_timeout": "none", 00:23:36.732 "timeout_us": 0, 00:23:36.732 "timeout_admin_us": 0, 00:23:36.732 "keep_alive_timeout_ms": 10000, 00:23:36.732 "arbitration_burst": 0, 00:23:36.732 "low_priority_weight": 0, 00:23:36.732 "medium_priority_weight": 0, 00:23:36.732 "high_priority_weight": 0, 00:23:36.732 "nvme_adminq_poll_period_us": 10000, 00:23:36.732 "nvme_ioq_poll_period_us": 0, 00:23:36.732 "io_queue_requests": 512, 00:23:36.732 "delay_cmd_submit": true, 00:23:36.732 "transport_retry_count": 4, 00:23:36.732 "bdev_retry_count": 3, 00:23:36.732 "transport_ack_timeout": 0, 00:23:36.732 "ctrlr_loss_timeout_sec": 0, 00:23:36.732 "reconnect_delay_sec": 0, 00:23:36.732 "fast_io_fail_timeout_sec": 0, 00:23:36.732 "disable_auto_failback": false, 00:23:36.732 "generate_uuids": false, 00:23:36.732 "transport_tos": 0, 00:23:36.732 "nvme_error_stat": false, 00:23:36.732 "rdma_srq_size": 0, 00:23:36.732 "io_path_stat": false, 00:23:36.732 "allow_accel_sequence": false, 00:23:36.732 "rdma_max_cq_size": 0, 00:23:36.732 "rdma_cm_event_timeout_ms": 0, 
00:23:36.732 "dhchap_digests": [ 00:23:36.732 "sha256", 00:23:36.732 "sha384", 00:23:36.732 "sha512" 00:23:36.732 ], 00:23:36.732 "dhchap_dhgroups": [ 00:23:36.732 "null", 00:23:36.732 "ffdhe2048", 00:23:36.732 "ffdhe3072", 00:23:36.732 "ffdhe4096", 00:23:36.732 "ffdhe6144", 00:23:36.732 "ffdhe8192" 00:23:36.732 ] 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_nvme_attach_controller", 00:23:36.732 "params": { 00:23:36.732 "name": "nvme0", 00:23:36.732 "trtype": "TCP", 00:23:36.732 "adrfam": "IPv4", 00:23:36.732 "traddr": "10.0.0.2", 00:23:36.732 "trsvcid": "4420", 00:23:36.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.732 "prchk_reftag": false, 00:23:36.732 "prchk_guard": false, 00:23:36.732 "ctrlr_loss_timeout_sec": 0, 00:23:36.732 "reconnect_delay_sec": 0, 00:23:36.732 "fast_io_fail_timeout_sec": 0, 00:23:36.732 "psk": "key0", 00:23:36.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.732 "hdgst": false, 00:23:36.732 "ddgst": false, 00:23:36.732 "multipath": "multipath" 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_nvme_set_hotplug", 00:23:36.732 "params": { 00:23:36.732 "period_us": 100000, 00:23:36.732 "enable": false 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_enable_histogram", 00:23:36.732 "params": { 00:23:36.732 "name": "nvme0n1", 00:23:36.732 "enable": true 00:23:36.732 } 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "method": "bdev_wait_for_examine" 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }, 00:23:36.732 { 00:23:36.732 "subsystem": "nbd", 00:23:36.732 "config": [] 00:23:36.732 } 00:23:36.732 ] 00:23:36.732 }' 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 950218 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950218 ']' 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950218 00:23:36.732 13:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950218 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950218' 00:23:36.732 killing process with pid 950218 00:23:36.732 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950218 00:23:36.733 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.733 00:23:36.733 Latency(us) 00:23:36.733 [2024-11-29T12:07:39.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.733 [2024-11-29T12:07:39.413Z] =================================================================================================================== 00:23:36.733 [2024-11-29T12:07:39.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.733 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950218 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949968 ']' 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.994 13:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949968' 00:23:36.994 killing process with pid 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949968 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.994 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:36.994 "subsystems": [ 00:23:36.994 { 00:23:36.994 "subsystem": "keyring", 00:23:36.994 "config": [ 00:23:36.994 { 00:23:36.994 "method": "keyring_file_add_key", 00:23:36.994 "params": { 00:23:36.994 "name": "key0", 00:23:36.994 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:36.994 } 00:23:36.994 } 00:23:36.994 ] 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "subsystem": "iobuf", 00:23:36.994 "config": [ 00:23:36.994 { 00:23:36.994 "method": "iobuf_set_options", 00:23:36.994 "params": { 00:23:36.994 "small_pool_count": 8192, 00:23:36.994 "large_pool_count": 1024, 00:23:36.994 "small_bufsize": 8192, 00:23:36.994 "large_bufsize": 135168, 00:23:36.994 "enable_numa": false 00:23:36.994 } 00:23:36.994 } 00:23:36.994 ] 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "subsystem": "sock", 00:23:36.994 "config": [ 00:23:36.994 { 
00:23:36.994 "method": "sock_set_default_impl", 00:23:36.994 "params": { 00:23:36.994 "impl_name": "posix" 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "sock_impl_set_options", 00:23:36.994 "params": { 00:23:36.994 "impl_name": "ssl", 00:23:36.994 "recv_buf_size": 4096, 00:23:36.994 "send_buf_size": 4096, 00:23:36.994 "enable_recv_pipe": true, 00:23:36.994 "enable_quickack": false, 00:23:36.994 "enable_placement_id": 0, 00:23:36.994 "enable_zerocopy_send_server": true, 00:23:36.994 "enable_zerocopy_send_client": false, 00:23:36.994 "zerocopy_threshold": 0, 00:23:36.994 "tls_version": 0, 00:23:36.994 "enable_ktls": false 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "sock_impl_set_options", 00:23:36.994 "params": { 00:23:36.994 "impl_name": "posix", 00:23:36.994 "recv_buf_size": 2097152, 00:23:36.994 "send_buf_size": 2097152, 00:23:36.994 "enable_recv_pipe": true, 00:23:36.994 "enable_quickack": false, 00:23:36.994 "enable_placement_id": 0, 00:23:36.994 "enable_zerocopy_send_server": true, 00:23:36.994 "enable_zerocopy_send_client": false, 00:23:36.994 "zerocopy_threshold": 0, 00:23:36.994 "tls_version": 0, 00:23:36.994 "enable_ktls": false 00:23:36.994 } 00:23:36.994 } 00:23:36.994 ] 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "subsystem": "vmd", 00:23:36.994 "config": [] 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "subsystem": "accel", 00:23:36.994 "config": [ 00:23:36.994 { 00:23:36.994 "method": "accel_set_options", 00:23:36.994 "params": { 00:23:36.994 "small_cache_size": 128, 00:23:36.994 "large_cache_size": 16, 00:23:36.994 "task_count": 2048, 00:23:36.994 "sequence_count": 2048, 00:23:36.994 "buf_count": 2048 00:23:36.994 } 00:23:36.994 } 00:23:36.994 ] 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "subsystem": "bdev", 00:23:36.994 "config": [ 00:23:36.994 { 00:23:36.994 "method": "bdev_set_options", 00:23:36.994 "params": { 00:23:36.994 "bdev_io_pool_size": 65535, 00:23:36.994 "bdev_io_cache_size": 256, 
00:23:36.994 "bdev_auto_examine": true, 00:23:36.994 "iobuf_small_cache_size": 128, 00:23:36.994 "iobuf_large_cache_size": 16 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "bdev_raid_set_options", 00:23:36.994 "params": { 00:23:36.994 "process_window_size_kb": 1024, 00:23:36.994 "process_max_bandwidth_mb_sec": 0 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "bdev_iscsi_set_options", 00:23:36.994 "params": { 00:23:36.994 "timeout_sec": 30 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "bdev_nvme_set_options", 00:23:36.994 "params": { 00:23:36.994 "action_on_timeout": "none", 00:23:36.994 "timeout_us": 0, 00:23:36.994 "timeout_admin_us": 0, 00:23:36.994 "keep_alive_timeout_ms": 10000, 00:23:36.994 "arbitration_burst": 0, 00:23:36.994 "low_priority_weight": 0, 00:23:36.994 "medium_priority_weight": 0, 00:23:36.994 "high_priority_weight": 0, 00:23:36.994 "nvme_adminq_poll_period_us": 10000, 00:23:36.994 "nvme_ioq_poll_period_us": 0, 00:23:36.994 "io_queue_requests": 0, 00:23:36.994 "delay_cmd_submit": true, 00:23:36.994 "transport_retry_count": 4, 00:23:36.994 "bdev_retry_count": 3, 00:23:36.994 "transport_ack_timeout": 0, 00:23:36.994 "ctrlr_loss_timeout_sec": 0, 00:23:36.994 "reconnect_delay_sec": 0, 00:23:36.994 "fast_io_fail_timeout_sec": 0, 00:23:36.994 "disable_auto_failback": false, 00:23:36.994 "generate_uuids": false, 00:23:36.994 "transport_tos": 0, 00:23:36.994 "nvme_error_stat": false, 00:23:36.994 "rdma_srq_size": 0, 00:23:36.994 "io_path_stat": false, 00:23:36.994 "allow_accel_sequence": false, 00:23:36.994 "rdma_max_cq_size": 0, 00:23:36.994 "rdma_cm_event_timeout_ms": 0, 00:23:36.994 "dhchap_digests": [ 00:23:36.994 "sha256", 00:23:36.994 "sha384", 00:23:36.994 "sha512" 00:23:36.994 ], 00:23:36.994 "dhchap_dhgroups": [ 00:23:36.994 "null", 00:23:36.994 "ffdhe2048", 00:23:36.994 "ffdhe3072", 00:23:36.994 "ffdhe4096", 00:23:36.994 "ffdhe6144", 00:23:36.994 "ffdhe8192" 00:23:36.994 ] 
00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "bdev_nvme_set_hotplug", 00:23:36.994 "params": { 00:23:36.994 "period_us": 100000, 00:23:36.994 "enable": false 00:23:36.994 } 00:23:36.994 }, 00:23:36.994 { 00:23:36.994 "method": "bdev_malloc_create", 00:23:36.994 "params": { 00:23:36.994 "name": "malloc0", 00:23:36.994 "num_blocks": 8192, 00:23:36.994 "block_size": 4096, 00:23:36.994 "physical_block_size": 4096, 00:23:36.994 "uuid": "026452b9-6e86-4eac-a1c5-ea66ea57375b", 00:23:36.994 "optimal_io_boundary": 0, 00:23:36.994 "md_size": 0, 00:23:36.994 "dif_type": 0, 00:23:36.994 "dif_is_head_of_md": false, 00:23:36.994 "dif_pi_format": 0 00:23:36.994 } 00:23:36.994 }, 00:23:36.995 { 00:23:36.995 "method": "bdev_wait_for_examine" 00:23:36.995 } 00:23:36.995 ] 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "subsystem": "nbd", 00:23:36.995 "config": [] 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "subsystem": "scheduler", 00:23:36.995 "config": [ 00:23:36.995 { 00:23:36.995 "method": "framework_set_scheduler", 00:23:36.995 "params": { 00:23:36.995 "name": "static" 00:23:36.995 } 00:23:36.995 } 00:23:36.995 ] 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "subsystem": "nvmf", 00:23:36.995 "config": [ 00:23:36.995 { 00:23:36.995 "method": "nvmf_set_config", 00:23:36.995 "params": { 00:23:36.995 "discovery_filter": "match_any", 00:23:36.995 "admin_cmd_passthru": { 00:23:36.995 "identify_ctrlr": false 00:23:36.995 }, 00:23:36.995 "dhchap_digests": [ 00:23:36.995 "sha256", 00:23:36.995 "sha384", 00:23:36.995 "sha512" 00:23:36.995 ], 00:23:36.995 "dhchap_dhgroups": [ 00:23:36.995 "null", 00:23:36.995 "ffdhe2048", 00:23:36.995 "ffdhe3072", 00:23:36.995 "ffdhe4096", 00:23:36.995 "ffdhe6144", 00:23:36.995 "ffdhe8192" 00:23:36.995 ] 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": "nvmf_set_max_subsystems", 00:23:36.995 "params": { 00:23:36.995 "max_subsystems": 1024 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": 
"nvmf_set_crdt", 00:23:36.995 "params": { 00:23:36.995 "crdt1": 0, 00:23:36.995 "crdt2": 0, 00:23:36.995 "crdt3": 0 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": "nvmf_create_transport", 00:23:36.995 "params": { 00:23:36.995 "trtype": "TCP", 00:23:36.995 "max_queue_depth": 128, 00:23:36.995 "max_io_qpairs_per_ctrlr": 127, 00:23:36.995 "in_capsule_data_size": 4096, 00:23:36.995 "max_io_size": 131072, 00:23:36.995 "io_unit_size": 131072, 00:23:36.995 "max_aq_depth": 128, 00:23:36.995 "num_shared_buffers": 511, 00:23:36.995 "buf_cache_size": 4294967295, 00:23:36.995 "dif_insert_or_strip": false, 00:23:36.995 "zcopy": false, 00:23:36.995 "c2h_success": false, 00:23:36.995 "sock_priority": 0, 00:23:36.995 "abort_timeout_sec": 1, 00:23:36.995 "ack_timeout": 0, 00:23:36.995 "data_wr_pool_size": 0 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": "nvmf_create_subsystem", 00:23:36.995 "params": { 00:23:36.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.995 "allow_any_host": false, 00:23:36.995 "serial_number": "00000000000000000000", 00:23:36.995 "model_number": "SPDK bdev Controller", 00:23:36.995 "max_namespaces": 32, 00:23:36.995 "min_cntlid": 1, 00:23:36.995 "max_cntlid": 65519, 00:23:36.995 "ana_reporting": false 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": "nvmf_subsystem_add_host", 00:23:36.995 "params": { 00:23:36.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.995 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.995 "psk": "key0" 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 00:23:36.995 "method": "nvmf_subsystem_add_ns", 00:23:36.995 "params": { 00:23:36.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.995 "namespace": { 00:23:36.995 "nsid": 1, 00:23:36.995 "bdev_name": "malloc0", 00:23:36.995 "nguid": "026452B96E864EACA1C5EA66EA57375B", 00:23:36.995 "uuid": "026452b9-6e86-4eac-a1c5-ea66ea57375b", 00:23:36.995 "no_auto_visible": false 00:23:36.995 } 00:23:36.995 } 00:23:36.995 }, 00:23:36.995 { 
00:23:36.995 "method": "nvmf_subsystem_add_listener", 00:23:36.995 "params": { 00:23:36.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.995 "listen_address": { 00:23:36.995 "trtype": "TCP", 00:23:36.995 "adrfam": "IPv4", 00:23:36.995 "traddr": "10.0.0.2", 00:23:36.995 "trsvcid": "4420" 00:23:36.995 }, 00:23:36.995 "secure_channel": false, 00:23:36.995 "sock_impl": "ssl" 00:23:36.995 } 00:23:36.995 } 00:23:36.995 ] 00:23:36.995 } 00:23:36.995 ] 00:23:36.995 }' 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=950905 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 950905 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950905 ']' 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.995 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.255 [2024-11-29 13:07:39.692930] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:23:37.255 [2024-11-29 13:07:39.692988] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.255 [2024-11-29 13:07:39.782377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.255 [2024-11-29 13:07:39.811926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.255 [2024-11-29 13:07:39.811954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.255 [2024-11-29 13:07:39.811959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.255 [2024-11-29 13:07:39.811964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.255 [2024-11-29 13:07:39.811968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.255 [2024-11-29 13:07:39.812451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.515 [2024-11-29 13:07:40.006791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.515 [2024-11-29 13:07:40.038817] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.515 [2024-11-29 13:07:40.039013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=950938 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 950938 /var/tmp/bdevperf.sock 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 950938 ']' 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:38.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.086 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:38.086 "subsystems": [ 00:23:38.086 { 00:23:38.086 "subsystem": "keyring", 00:23:38.086 "config": [ 00:23:38.086 { 00:23:38.086 "method": "keyring_file_add_key", 00:23:38.086 "params": { 00:23:38.086 "name": "key0", 00:23:38.086 "path": "/tmp/tmp.rkkm9JK3VS" 00:23:38.086 } 00:23:38.086 } 00:23:38.086 ] 00:23:38.086 }, 00:23:38.086 { 00:23:38.086 "subsystem": "iobuf", 00:23:38.086 "config": [ 00:23:38.086 { 00:23:38.086 "method": "iobuf_set_options", 00:23:38.086 "params": { 00:23:38.086 "small_pool_count": 8192, 00:23:38.086 "large_pool_count": 1024, 00:23:38.086 "small_bufsize": 8192, 00:23:38.086 "large_bufsize": 135168, 00:23:38.086 "enable_numa": false 00:23:38.086 } 00:23:38.086 } 00:23:38.086 ] 00:23:38.086 }, 00:23:38.086 { 00:23:38.086 "subsystem": "sock", 00:23:38.086 "config": [ 00:23:38.086 { 00:23:38.086 "method": "sock_set_default_impl", 00:23:38.086 "params": { 00:23:38.086 "impl_name": "posix" 00:23:38.086 } 00:23:38.086 }, 00:23:38.086 { 00:23:38.086 "method": "sock_impl_set_options", 00:23:38.086 "params": { 00:23:38.086 "impl_name": "ssl", 00:23:38.086 "recv_buf_size": 4096, 00:23:38.086 "send_buf_size": 4096, 00:23:38.086 "enable_recv_pipe": true, 00:23:38.086 "enable_quickack": false, 00:23:38.086 "enable_placement_id": 0, 00:23:38.086 "enable_zerocopy_send_server": true, 00:23:38.086 
"enable_zerocopy_send_client": false, 00:23:38.086 "zerocopy_threshold": 0, 00:23:38.086 "tls_version": 0, 00:23:38.086 "enable_ktls": false 00:23:38.086 } 00:23:38.086 }, 00:23:38.086 { 00:23:38.086 "method": "sock_impl_set_options", 00:23:38.086 "params": { 00:23:38.086 "impl_name": "posix", 00:23:38.086 "recv_buf_size": 2097152, 00:23:38.086 "send_buf_size": 2097152, 00:23:38.087 "enable_recv_pipe": true, 00:23:38.087 "enable_quickack": false, 00:23:38.087 "enable_placement_id": 0, 00:23:38.087 "enable_zerocopy_send_server": true, 00:23:38.087 "enable_zerocopy_send_client": false, 00:23:38.087 "zerocopy_threshold": 0, 00:23:38.087 "tls_version": 0, 00:23:38.087 "enable_ktls": false 00:23:38.087 } 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "subsystem": "vmd", 00:23:38.087 "config": [] 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "subsystem": "accel", 00:23:38.087 "config": [ 00:23:38.087 { 00:23:38.087 "method": "accel_set_options", 00:23:38.087 "params": { 00:23:38.087 "small_cache_size": 128, 00:23:38.087 "large_cache_size": 16, 00:23:38.087 "task_count": 2048, 00:23:38.087 "sequence_count": 2048, 00:23:38.087 "buf_count": 2048 00:23:38.087 } 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "subsystem": "bdev", 00:23:38.087 "config": [ 00:23:38.087 { 00:23:38.087 "method": "bdev_set_options", 00:23:38.087 "params": { 00:23:38.087 "bdev_io_pool_size": 65535, 00:23:38.087 "bdev_io_cache_size": 256, 00:23:38.087 "bdev_auto_examine": true, 00:23:38.087 "iobuf_small_cache_size": 128, 00:23:38.087 "iobuf_large_cache_size": 16 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_raid_set_options", 00:23:38.087 "params": { 00:23:38.087 "process_window_size_kb": 1024, 00:23:38.087 "process_max_bandwidth_mb_sec": 0 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_iscsi_set_options", 00:23:38.087 "params": { 00:23:38.087 "timeout_sec": 30 00:23:38.087 } 00:23:38.087 }, 
00:23:38.087 { 00:23:38.087 "method": "bdev_nvme_set_options", 00:23:38.087 "params": { 00:23:38.087 "action_on_timeout": "none", 00:23:38.087 "timeout_us": 0, 00:23:38.087 "timeout_admin_us": 0, 00:23:38.087 "keep_alive_timeout_ms": 10000, 00:23:38.087 "arbitration_burst": 0, 00:23:38.087 "low_priority_weight": 0, 00:23:38.087 "medium_priority_weight": 0, 00:23:38.087 "high_priority_weight": 0, 00:23:38.087 "nvme_adminq_poll_period_us": 10000, 00:23:38.087 "nvme_ioq_poll_period_us": 0, 00:23:38.087 "io_queue_requests": 512, 00:23:38.087 "delay_cmd_submit": true, 00:23:38.087 "transport_retry_count": 4, 00:23:38.087 "bdev_retry_count": 3, 00:23:38.087 "transport_ack_timeout": 0, 00:23:38.087 "ctrlr_loss_timeout_sec": 0, 00:23:38.087 "reconnect_delay_sec": 0, 00:23:38.087 "fast_io_fail_timeout_sec": 0, 00:23:38.087 "disable_auto_failback": false, 00:23:38.087 "generate_uuids": false, 00:23:38.087 "transport_tos": 0, 00:23:38.087 "nvme_error_stat": false, 00:23:38.087 "rdma_srq_size": 0, 00:23:38.087 "io_path_stat": false, 00:23:38.087 "allow_accel_sequence": false, 00:23:38.087 "rdma_max_cq_size": 0, 00:23:38.087 "rdma_cm_event_timeout_ms": 0, 00:23:38.087 "dhchap_digests": [ 00:23:38.087 "sha256", 00:23:38.087 "sha384", 00:23:38.087 "sha512" 00:23:38.087 ], 00:23:38.087 "dhchap_dhgroups": [ 00:23:38.087 "null", 00:23:38.087 "ffdhe2048", 00:23:38.087 "ffdhe3072", 00:23:38.087 "ffdhe4096", 00:23:38.087 "ffdhe6144", 00:23:38.087 "ffdhe8192" 00:23:38.087 ] 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_nvme_attach_controller", 00:23:38.087 "params": { 00:23:38.087 "name": "nvme0", 00:23:38.087 "trtype": "TCP", 00:23:38.087 "adrfam": "IPv4", 00:23:38.087 "traddr": "10.0.0.2", 00:23:38.087 "trsvcid": "4420", 00:23:38.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.087 "prchk_reftag": false, 00:23:38.087 "prchk_guard": false, 00:23:38.087 "ctrlr_loss_timeout_sec": 0, 00:23:38.087 "reconnect_delay_sec": 0, 00:23:38.087 
"fast_io_fail_timeout_sec": 0, 00:23:38.087 "psk": "key0", 00:23:38.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.087 "hdgst": false, 00:23:38.087 "ddgst": false, 00:23:38.087 "multipath": "multipath" 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_nvme_set_hotplug", 00:23:38.087 "params": { 00:23:38.087 "period_us": 100000, 00:23:38.087 "enable": false 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_enable_histogram", 00:23:38.087 "params": { 00:23:38.087 "name": "nvme0n1", 00:23:38.087 "enable": true 00:23:38.087 } 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "method": "bdev_wait_for_examine" 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 }, 00:23:38.087 { 00:23:38.087 "subsystem": "nbd", 00:23:38.087 "config": [] 00:23:38.087 } 00:23:38.087 ] 00:23:38.087 }' 00:23:38.087 [2024-11-29 13:07:40.581877] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:23:38.087 [2024-11-29 13:07:40.581945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950938 ] 00:23:38.087 [2024-11-29 13:07:40.668340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.087 [2024-11-29 13:07:40.698395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.348 [2024-11-29 13:07:40.834290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.919 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.178 Running I/O for 1 seconds... 00:23:40.118 5380.00 IOPS, 21.02 MiB/s 00:23:40.118 Latency(us) 00:23:40.118 [2024-11-29T12:07:42.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.118 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:40.118 Verification LBA range: start 0x0 length 0x2000 00:23:40.118 nvme0n1 : 1.05 5267.12 20.57 0.00 0.00 23825.58 6171.31 44782.93 00:23:40.118 [2024-11-29T12:07:42.798Z] =================================================================================================================== 00:23:40.118 [2024-11-29T12:07:42.798Z] Total : 5267.12 20.57 0.00 0.00 23825.58 6171.31 44782.93 00:23:40.118 { 00:23:40.118 "results": [ 00:23:40.118 { 00:23:40.118 "job": "nvme0n1", 00:23:40.118 "core_mask": "0x2", 00:23:40.118 "workload": "verify", 00:23:40.118 "status": "finished", 00:23:40.118 "verify_range": { 00:23:40.118 "start": 0, 00:23:40.118 "length": 8192 00:23:40.118 }, 00:23:40.118 "queue_depth": 128, 00:23:40.118 "io_size": 4096, 00:23:40.118 "runtime": 1.045922, 00:23:40.118 "iops": 5267.1231697966, 00:23:40.118 "mibps": 20.57469988201797, 00:23:40.118 "io_failed": 0, 00:23:40.118 "io_timeout": 0, 00:23:40.118 "avg_latency_us": 23825.582167362496, 00:23:40.118 "min_latency_us": 6171.306666666666, 00:23:40.118 "max_latency_us": 44782.933333333334 00:23:40.118 } 00:23:40.118 ], 00:23:40.118 "core_count": 1 00:23:40.118 } 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM 
EXIT 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:40.118 nvmf_trace.0 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 950938 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950938 ']' 00:23:40.118 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950938 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 950938 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950938' 00:23:40.379 killing process with pid 950938 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950938 00:23:40.379 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.379 00:23:40.379 Latency(us) 00:23:40.379 [2024-11-29T12:07:43.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.379 [2024-11-29T12:07:43.059Z] =================================================================================================================== 00:23:40.379 [2024-11-29T12:07:43.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950938 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.379 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.379 rmmod nvme_tcp 00:23:40.379 rmmod nvme_fabrics 00:23:40.379 rmmod nvme_keyring 00:23:40.379 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 950905 ']' 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 950905 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 950905 ']' 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 950905 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.380 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950905 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950905' 00:23:40.640 killing process with pid 950905 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 950905 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 950905 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@297 -- # iptr 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.640 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.lVvNSn6xoQ /tmp/tmp.pwvLxzFEnO /tmp/tmp.rkkm9JK3VS 00:23:43.190 00:23:43.190 real 1m28.492s 00:23:43.190 user 2m20.282s 00:23:43.190 sys 0m27.123s 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.190 ************************************ 00:23:43.190 END TEST nvmf_tls 00:23:43.190 ************************************ 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:43.190 ************************************ 00:23:43.190 START TEST nvmf_fips 00:23:43.190 ************************************ 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:43.190 * Looking for test storage... 00:23:43.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.190 
13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:43.190 13:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.190 --rc genhtml_branch_coverage=1 00:23:43.190 --rc genhtml_function_coverage=1 00:23:43.190 --rc genhtml_legend=1 00:23:43.190 --rc geninfo_all_blocks=1 00:23:43.190 --rc geninfo_unexecuted_blocks=1 00:23:43.190 00:23:43.190 ' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:43.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.190 --rc genhtml_branch_coverage=1 00:23:43.190 --rc genhtml_function_coverage=1 00:23:43.190 --rc genhtml_legend=1 00:23:43.190 --rc geninfo_all_blocks=1 00:23:43.190 --rc geninfo_unexecuted_blocks=1 00:23:43.190 00:23:43.190 ' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.190 --rc genhtml_branch_coverage=1 00:23:43.190 --rc genhtml_function_coverage=1 00:23:43.190 --rc genhtml_legend=1 00:23:43.190 --rc geninfo_all_blocks=1 00:23:43.190 --rc geninfo_unexecuted_blocks=1 00:23:43.190 00:23:43.190 ' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.190 --rc genhtml_branch_coverage=1 00:23:43.190 --rc genhtml_function_coverage=1 00:23:43.190 --rc genhtml_legend=1 00:23:43.190 --rc geninfo_all_blocks=1 00:23:43.190 --rc geninfo_unexecuted_blocks=1 00:23:43.190 00:23:43.190 ' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.190 13:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.190 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.191 13:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:43.191 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:43.191 Error setting digest 00:23:43.191 40528F90D17F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:43.192 40528F90D17F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.192 13:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.192 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.340 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.341 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.341 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.341 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.341 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:51.341 00:23:51.341 --- 10.0.0.2 ping statistics --- 00:23:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.341 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:51.341 00:23:51.341 --- 10.0.0.1 ping statistics --- 00:23:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.341 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.341 13:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=955751 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 955751 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 955751 ']' 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.341 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.341 [2024-11-29 13:07:53.376322] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:23:51.341 [2024-11-29 13:07:53.376396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.341 [2024-11-29 13:07:53.475785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.341 [2024-11-29 13:07:53.526572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.341 [2024-11-29 13:07:53.526623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.341 [2024-11-29 13:07:53.526631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.341 [2024-11-29 13:07:53.526638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.341 [2024-11-29 13:07:53.526645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.341 [2024-11-29 13:07:53.527442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Fmc 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Fmc 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Fmc 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Fmc 00:23:51.602 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.863 [2024-11-29 13:07:54.379284] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.863 [2024-11-29 13:07:54.395265] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.863 [2024-11-29 13:07:54.395580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.863 malloc0 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=955990 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 955990 /var/tmp/bdevperf.sock 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 955990 ']' 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.863 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.864 [2024-11-29 13:07:54.538602] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:23:51.864 [2024-11-29 13:07:54.538686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955990 ] 00:23:52.125 [2024-11-29 13:07:54.633446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.125 [2024-11-29 13:07:54.684343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.696 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.696 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:52.696 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Fmc 00:23:52.956 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.215 [2024-11-29 13:07:55.675602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.215 TLSTESTn1 00:23:53.215 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.215 Running I/O for 10 seconds... 
00:23:55.536 5588.00 IOPS, 21.83 MiB/s [2024-11-29T12:07:59.178Z] 5257.50 IOPS, 20.54 MiB/s [2024-11-29T12:08:00.117Z] 4929.67 IOPS, 19.26 MiB/s [2024-11-29T12:08:01.058Z] 5219.00 IOPS, 20.39 MiB/s [2024-11-29T12:08:01.997Z] 5392.60 IOPS, 21.06 MiB/s [2024-11-29T12:08:02.938Z] 5408.33 IOPS, 21.13 MiB/s [2024-11-29T12:08:04.321Z] 5325.57 IOPS, 20.80 MiB/s [2024-11-29T12:08:05.264Z] 5430.00 IOPS, 21.21 MiB/s [2024-11-29T12:08:06.205Z] 5488.33 IOPS, 21.44 MiB/s [2024-11-29T12:08:06.205Z] 5487.20 IOPS, 21.43 MiB/s 00:24:03.525 Latency(us) 00:24:03.525 [2024-11-29T12:08:06.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.525 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.525 Verification LBA range: start 0x0 length 0x2000 00:24:03.525 TLSTESTn1 : 10.02 5491.16 21.45 0.00 0.00 23275.95 6389.76 26542.08 00:24:03.525 [2024-11-29T12:08:06.205Z] =================================================================================================================== 00:24:03.525 [2024-11-29T12:08:06.205Z] Total : 5491.16 21.45 0.00 0.00 23275.95 6389.76 26542.08 00:24:03.525 { 00:24:03.525 "results": [ 00:24:03.525 { 00:24:03.525 "job": "TLSTESTn1", 00:24:03.525 "core_mask": "0x4", 00:24:03.525 "workload": "verify", 00:24:03.525 "status": "finished", 00:24:03.525 "verify_range": { 00:24:03.525 "start": 0, 00:24:03.525 "length": 8192 00:24:03.525 }, 00:24:03.525 "queue_depth": 128, 00:24:03.525 "io_size": 4096, 00:24:03.525 "runtime": 10.015918, 00:24:03.525 "iops": 5491.159172828691, 00:24:03.525 "mibps": 21.449840518862075, 00:24:03.525 "io_failed": 0, 00:24:03.525 "io_timeout": 0, 00:24:03.525 "avg_latency_us": 23275.951887610077, 00:24:03.525 "min_latency_us": 6389.76, 00:24:03.525 "max_latency_us": 26542.08 00:24:03.525 } 00:24:03.525 ], 00:24:03.525 "core_count": 1 00:24:03.525 } 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:03.525 13:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:03.525 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.525 nvmf_trace.0 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 955990 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 955990 ']' 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 955990 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 955990 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 955990' 00:24:03.525 killing process with pid 955990 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 955990 00:24:03.525 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.525 00:24:03.525 Latency(us) 00:24:03.525 [2024-11-29T12:08:06.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.525 [2024-11-29T12:08:06.205Z] =================================================================================================================== 00:24:03.525 [2024-11-29T12:08:06.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.525 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 955990 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.787 rmmod nvme_tcp 00:24:03.787 rmmod nvme_fabrics 00:24:03.787 rmmod nvme_keyring 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.787 13:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 955751 ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 955751 ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 955751' 00:24:03.787 killing process with pid 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 955751 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.787 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.788 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:03.788 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:03.788 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.788 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.049 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.050 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.050 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.050 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.050 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Fmc 00:24:05.966 00:24:05.966 real 0m23.162s 00:24:05.966 user 0m24.811s 00:24:05.966 sys 0m9.675s 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.966 ************************************ 00:24:05.966 END TEST nvmf_fips 00:24:05.966 ************************************ 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.966 ************************************ 00:24:05.966 START TEST nvmf_control_msg_list 00:24:05.966 ************************************ 00:24:05.966 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:06.228 * Looking for test storage... 00:24:06.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:06.228 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.229 --rc genhtml_branch_coverage=1 00:24:06.229 --rc genhtml_function_coverage=1 00:24:06.229 --rc genhtml_legend=1 00:24:06.229 --rc geninfo_all_blocks=1 00:24:06.229 --rc geninfo_unexecuted_blocks=1 00:24:06.229 00:24:06.229 ' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.229 --rc genhtml_branch_coverage=1 00:24:06.229 --rc genhtml_function_coverage=1 00:24:06.229 --rc genhtml_legend=1 00:24:06.229 --rc geninfo_all_blocks=1 00:24:06.229 --rc geninfo_unexecuted_blocks=1 00:24:06.229 00:24:06.229 ' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.229 --rc genhtml_branch_coverage=1 00:24:06.229 --rc genhtml_function_coverage=1 00:24:06.229 --rc genhtml_legend=1 00:24:06.229 --rc geninfo_all_blocks=1 00:24:06.229 --rc geninfo_unexecuted_blocks=1 00:24:06.229 00:24:06.229 ' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.229 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.229 --rc genhtml_branch_coverage=1 00:24:06.229 --rc genhtml_function_coverage=1 00:24:06.229 --rc genhtml_legend=1 00:24:06.229 --rc geninfo_all_blocks=1 00:24:06.229 --rc geninfo_unexecuted_blocks=1 00:24:06.229 00:24:06.229 ' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.229 13:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.229 13:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.229 13:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.229 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.376 13:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.376 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.377 13:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.377 13:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.377 13:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:24:14.377 00:24:14.377 --- 10.0.0.2 ping statistics --- 00:24:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.377 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:24:14.377 00:24:14.377 --- 10.0.0.1 ping statistics --- 00:24:14.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.377 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=962465 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 962465 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 962465 ']' 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.377 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:14.378 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.378 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.378 [2024-11-29 13:08:16.469866] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:24:14.378 [2024-11-29 13:08:16.469938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.378 [2024-11-29 13:08:16.570577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.378 [2024-11-29 13:08:16.621688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.378 [2024-11-29 13:08:16.621742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.378 [2024-11-29 13:08:16.621751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.378 [2024-11-29 13:08:16.621759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.378 [2024-11-29 13:08:16.621765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.378 [2024-11-29 13:08:16.622549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.638 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.639 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:14.639 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.639 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.639 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 [2024-11-29 13:08:17.326130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 Malloc0 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:14.901 [2024-11-29 13:08:17.380590] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=962687 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=962688 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=962689 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 962687 00:24:14.901 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.901 [2024-11-29 13:08:17.481196] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:14.901 [2024-11-29 13:08:17.491205] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:14.901 [2024-11-29 13:08:17.491519] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:16.285 Initializing NVMe Controllers 00:24:16.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:16.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:16.286 Initialization complete. Launching workers. 00:24:16.286 ======================================================== 00:24:16.286 Latency(us) 00:24:16.286 Device Information : IOPS MiB/s Average min max 00:24:16.286 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40898.08 40780.88 40971.93 00:24:16.286 ======================================================== 00:24:16.286 Total : 25.00 0.10 40898.08 40780.88 40971.93 00:24:16.286 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 962688 00:24:16.286 Initializing NVMe Controllers 00:24:16.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:16.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:16.286 Initialization complete. Launching workers. 
00:24:16.286 ======================================================== 00:24:16.286 Latency(us) 00:24:16.286 Device Information : IOPS MiB/s Average min max 00:24:16.286 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40896.38 40774.45 41001.34 00:24:16.286 ======================================================== 00:24:16.286 Total : 25.00 0.10 40896.38 40774.45 41001.34 00:24:16.286 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 962689 00:24:16.286 Initializing NVMe Controllers 00:24:16.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:16.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:16.286 Initialization complete. Launching workers. 00:24:16.286 ======================================================== 00:24:16.286 Latency(us) 00:24:16.286 Device Information : IOPS MiB/s Average min max 00:24:16.286 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40932.49 40823.73 41591.05 00:24:16.286 ======================================================== 00:24:16.286 Total : 25.00 0.10 40932.49 40823.73 41591.05 00:24:16.286 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:16.286 13:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.286 rmmod nvme_tcp 00:24:16.286 rmmod nvme_fabrics 00:24:16.286 rmmod nvme_keyring 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 962465 ']' 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 962465 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 962465 ']' 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 962465 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962465 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962465' 00:24:16.286 killing process with pid 962465 00:24:16.286 13:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 962465 00:24:16.286 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 962465 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.547 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.458 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.458 00:24:18.458 real 0m12.498s 00:24:18.458 user 0m8.239s 00:24:18.458 sys 0m6.502s 00:24:18.458 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.458 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:18.458 ************************************ 00:24:18.458 END TEST nvmf_control_msg_list 00:24:18.458 ************************************ 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:18.719 ************************************ 00:24:18.719 START TEST nvmf_wait_for_buf 00:24:18.719 ************************************ 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:18.719 * Looking for test storage... 
00:24:18.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.719 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.980 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:18.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.980 --rc genhtml_branch_coverage=1 00:24:18.981 --rc genhtml_function_coverage=1 00:24:18.981 --rc genhtml_legend=1 00:24:18.981 --rc geninfo_all_blocks=1 00:24:18.981 --rc geninfo_unexecuted_blocks=1 00:24:18.981 00:24:18.981 ' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:18.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.981 --rc genhtml_branch_coverage=1 00:24:18.981 --rc genhtml_function_coverage=1 00:24:18.981 --rc genhtml_legend=1 00:24:18.981 --rc geninfo_all_blocks=1 00:24:18.981 --rc geninfo_unexecuted_blocks=1 00:24:18.981 00:24:18.981 ' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:18.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.981 --rc genhtml_branch_coverage=1 00:24:18.981 --rc genhtml_function_coverage=1 00:24:18.981 --rc genhtml_legend=1 00:24:18.981 --rc geninfo_all_blocks=1 00:24:18.981 --rc geninfo_unexecuted_blocks=1 00:24:18.981 00:24:18.981 ' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:18.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.981 --rc genhtml_branch_coverage=1 00:24:18.981 --rc genhtml_function_coverage=1 00:24:18.981 --rc genhtml_legend=1 00:24:18.981 --rc geninfo_all_blocks=1 00:24:18.981 --rc geninfo_unexecuted_blocks=1 00:24:18.981 00:24:18.981 ' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.981 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.325 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.325 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.325 13:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.325 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.326 13:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.326 13:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:27.326 00:24:27.326 --- 10.0.0.2 ping statistics --- 00:24:27.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.326 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:24:27.326 00:24:27.326 --- 10.0.0.1 ping statistics --- 00:24:27.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.326 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=967129 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 967129 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 967129 ']' 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.326 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.326 [2024-11-29 13:08:29.001180] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:24:27.326 [2024-11-29 13:08:29.001246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.326 [2024-11-29 13:08:29.099874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.326 [2024-11-29 13:08:29.150747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.326 [2024-11-29 13:08:29.150797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:27.326 [2024-11-29 13:08:29.150806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.326 [2024-11-29 13:08:29.150813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.326 [2024-11-29 13:08:29.150820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.326 [2024-11-29 13:08:29.151573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.326 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.327 
13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.327 Malloc0 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.327 [2024-11-29 13:08:29.978115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.327 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.592 [2024-11-29 13:08:30.014487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:27.592 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.592 [2024-11-29 13:08:30.118290] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:28.979 Initializing NVMe Controllers 00:24:28.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:28.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:28.979 Initialization complete. Launching workers. 00:24:28.979 ======================================================== 00:24:28.979 Latency(us) 00:24:28.979 Device Information : IOPS MiB/s Average min max 00:24:28.979 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.98 8018.00 63860.15 00:24:28.979 ======================================================== 00:24:28.979 Total : 129.00 16.12 32294.98 8018.00 63860.15 00:24:28.979 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.979 13:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.979 rmmod nvme_tcp 00:24:28.979 rmmod nvme_fabrics 00:24:28.979 rmmod nvme_keyring 00:24:28.979 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 967129 ']' 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 967129 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 967129 ']' 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 967129 
00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 967129 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 967129' 00:24:29.240 killing process with pid 967129 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 967129 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 967129 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.240 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.241 13:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.241 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.787 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.787 00:24:31.787 real 0m12.772s 00:24:31.787 user 0m5.139s 00:24:31.787 sys 0m6.219s 00:24:31.787 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.787 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.787 ************************************ 00:24:31.787 END TEST nvmf_wait_for_buf 00:24:31.787 ************************************ 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.787 13:08:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.940 
13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.940 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.940 13:08:41 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.940 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.940 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.940 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:39.940 ************************************ 00:24:39.940 START TEST nvmf_perf_adq 00:24:39.940 ************************************ 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:39.940 * Looking for test storage... 00:24:39.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.940 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.941 --rc genhtml_branch_coverage=1 00:24:39.941 --rc genhtml_function_coverage=1 00:24:39.941 --rc genhtml_legend=1 00:24:39.941 --rc geninfo_all_blocks=1 00:24:39.941 --rc geninfo_unexecuted_blocks=1 00:24:39.941 00:24:39.941 ' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.941 --rc genhtml_branch_coverage=1 00:24:39.941 --rc genhtml_function_coverage=1 00:24:39.941 --rc genhtml_legend=1 00:24:39.941 --rc geninfo_all_blocks=1 00:24:39.941 --rc geninfo_unexecuted_blocks=1 00:24:39.941 00:24:39.941 ' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.941 --rc genhtml_branch_coverage=1 00:24:39.941 --rc genhtml_function_coverage=1 00:24:39.941 --rc genhtml_legend=1 00:24:39.941 --rc geninfo_all_blocks=1 00:24:39.941 --rc geninfo_unexecuted_blocks=1 00:24:39.941 00:24:39.941 ' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.941 --rc genhtml_branch_coverage=1 00:24:39.941 --rc genhtml_function_coverage=1 00:24:39.941 --rc genhtml_legend=1 00:24:39.941 --rc geninfo_all_blocks=1 00:24:39.941 --rc geninfo_unexecuted_blocks=1 00:24:39.941 00:24:39.941 ' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.941 13:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.941 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.612 13:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.612 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.613 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.613 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.613 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.613 13:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:46.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:24:46.613 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:47.996 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:49.912 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.203 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.203 13:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.203 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:55.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:55.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:55.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:55.204 00:24:55.204 --- 10.0.0.2 ping statistics --- 00:24:55.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.204 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:55.204 00:24:55.204 --- 10.0.0.1 ping statistics --- 00:24:55.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.204 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=977435 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 977435 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 977435 ']' 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.204 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:55.465 [2024-11-29 13:08:57.882060] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:24:55.465 [2024-11-29 13:08:57.882127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.465 [2024-11-29 13:08:57.984249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.465 [2024-11-29 13:08:58.039062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.465 [2024-11-29 13:08:58.039117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.465 [2024-11-29 13:08:58.039126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.465 [2024-11-29 13:08:58.039134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.465 [2024-11-29 13:08:58.039141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.465 [2024-11-29 13:08:58.041236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.465 [2024-11-29 13:08:58.041342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.465 [2024-11-29 13:08:58.041505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.465 [2024-11-29 13:08:58.041507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.037 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.037 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:56.037 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:56.037 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.037 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:56.298 13:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 [2024-11-29 13:08:58.915185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 Malloc1 00:24:56.298 13:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.298 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:56.559 [2024-11-29 13:08:58.987980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=977635 00:24:56.559 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:56.559 13:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.477 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:58.477 "tick_rate": 2400000000, 00:24:58.477 "poll_groups": [ 00:24:58.477 { 00:24:58.477 "name": "nvmf_tgt_poll_group_000", 00:24:58.477 "admin_qpairs": 1, 00:24:58.477 "io_qpairs": 1, 00:24:58.477 "current_admin_qpairs": 1, 00:24:58.477 "current_io_qpairs": 1, 00:24:58.477 "pending_bdev_io": 0, 00:24:58.477 "completed_nvme_io": 16178, 00:24:58.477 "transports": [ 00:24:58.477 { 00:24:58.477 "trtype": "TCP" 00:24:58.477 } 00:24:58.477 ] 00:24:58.477 }, 00:24:58.477 { 00:24:58.477 "name": "nvmf_tgt_poll_group_001", 00:24:58.477 "admin_qpairs": 0, 00:24:58.477 "io_qpairs": 1, 00:24:58.477 "current_admin_qpairs": 0, 00:24:58.477 "current_io_qpairs": 1, 00:24:58.477 "pending_bdev_io": 0, 00:24:58.477 "completed_nvme_io": 18074, 00:24:58.477 "transports": [ 00:24:58.477 { 00:24:58.477 "trtype": "TCP" 00:24:58.477 } 00:24:58.477 ] 00:24:58.477 }, 00:24:58.477 { 00:24:58.477 "name": "nvmf_tgt_poll_group_002", 00:24:58.477 "admin_qpairs": 0, 00:24:58.477 "io_qpairs": 1, 00:24:58.477 "current_admin_qpairs": 0, 00:24:58.477 "current_io_qpairs": 1, 00:24:58.477 "pending_bdev_io": 0, 00:24:58.477 "completed_nvme_io": 17210, 00:24:58.477 
"transports": [ 00:24:58.477 { 00:24:58.477 "trtype": "TCP" 00:24:58.477 } 00:24:58.477 ] 00:24:58.477 }, 00:24:58.477 { 00:24:58.477 "name": "nvmf_tgt_poll_group_003", 00:24:58.477 "admin_qpairs": 0, 00:24:58.477 "io_qpairs": 1, 00:24:58.477 "current_admin_qpairs": 0, 00:24:58.477 "current_io_qpairs": 1, 00:24:58.477 "pending_bdev_io": 0, 00:24:58.477 "completed_nvme_io": 16261, 00:24:58.477 "transports": [ 00:24:58.477 { 00:24:58.477 "trtype": "TCP" 00:24:58.477 } 00:24:58.477 ] 00:24:58.477 } 00:24:58.477 ] 00:24:58.477 }' 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:58.477 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 977635 00:25:06.617 Initializing NVMe Controllers 00:25:06.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:06.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:06.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:06.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:06.617 Initialization complete. Launching workers. 
00:25:06.617 ======================================================== 00:25:06.617 Latency(us) 00:25:06.617 Device Information : IOPS MiB/s Average min max 00:25:06.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12513.00 48.88 5114.84 1144.93 12469.92 00:25:06.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13599.90 53.12 4705.18 1355.30 12564.46 00:25:06.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13445.20 52.52 4760.20 1326.77 11634.41 00:25:06.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12747.70 49.80 5020.15 1253.59 12957.58 00:25:06.618 ======================================================== 00:25:06.618 Total : 52305.79 204.32 4894.09 1144.93 12957.58 00:25:06.618 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.618 rmmod nvme_tcp 00:25:06.618 rmmod nvme_fabrics 00:25:06.618 rmmod nvme_keyring 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:06.618 13:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 977435 ']' 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 977435 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 977435 ']' 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 977435 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 977435 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 977435' 00:25:06.618 killing process with pid 977435 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 977435 00:25:06.618 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 977435 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.879 
13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.879 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.428 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.428 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:09.428 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:09.428 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:10.370 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:12.915 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.205 13:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:18.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:18.205 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:18.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:18.205 13:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:18.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.205 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:25:18.206 00:25:18.206 --- 10.0.0.2 ping statistics --- 00:25:18.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.206 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:25:18.206 00:25:18.206 --- 10.0.0.1 ping statistics --- 00:25:18.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.206 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:18.206 net.core.busy_poll = 1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:18.206 net.core.busy_read = 1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=982810 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 982810 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 982810 ']' 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.206 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.206 [2024-11-29 13:09:20.748451] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:18.206 [2024-11-29 13:09:20.748524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.206 [2024-11-29 13:09:20.848148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.468 [2024-11-29 13:09:20.901406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.468 [2024-11-29 13:09:20.901459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.468 [2024-11-29 13:09:20.901468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.468 [2024-11-29 13:09:20.901476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:18.468 [2024-11-29 13:09:20.901482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.468 [2024-11-29 13:09:20.903513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.468 [2024-11-29 13:09:20.903673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.468 [2024-11-29 13:09:20.903840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.468 [2024-11-29 13:09:20.903840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
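Earlier in this run (perf_adq.sh@22-35) the test programs the E810 NIC for ADQ inside the target netns: hardware TC offload plus busy polling, an mqprio root qdisc with two traffic classes, and a hardware flower filter steering NVMe/TCP port 4420 into TC1. A dry-run sketch of that sequence follows; commands are echoed rather than executed, since the real ones need root, the `ice` driver, and the `cvl_0_0` interface from the log:

```shell
# Dry-run sketch of the ADQ host configuration from perf_adq.sh@22-35.
# run() only prints the command; the real invocations require root and
# an Intel E810 ("ice") port, here cvl_0_0 as seen in the log.
IFACE=cvl_0_0
run() { echo "+ $*"; }
# Enable hardware TC offload; disable packet-inspect optimization for ADQ.
run ethtool --offload "$IFACE" hw-tc-offload on
run ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
# Busy polling lets target threads spin on the socket instead of sleeping.
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 (default) on queues 0-1, TC1 (ADQ) on queues 2-3,
# offloaded to hardware in channel mode.
run tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$IFACE" ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 purely in hardware.
run tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The test then runs `scripts/perf/nvmf/set_xps_rxqs` on the same interface to pin transmit queues to the receive queues ADQ selected.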
00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.041 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 [2024-11-29 13:09:21.773540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.302 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 Malloc1 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:19.302 [2024-11-29 13:09:21.856143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=983020 
00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:19.302 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:21.219 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:21.219 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.219 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:21.219 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.219 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:21.219 "tick_rate": 2400000000, 00:25:21.219 "poll_groups": [ 00:25:21.219 { 00:25:21.219 "name": "nvmf_tgt_poll_group_000", 00:25:21.219 "admin_qpairs": 1, 00:25:21.219 "io_qpairs": 2, 00:25:21.219 "current_admin_qpairs": 1, 00:25:21.219 "current_io_qpairs": 2, 00:25:21.219 "pending_bdev_io": 0, 00:25:21.219 "completed_nvme_io": 25997, 00:25:21.219 "transports": [ 00:25:21.219 { 00:25:21.219 "trtype": "TCP" 00:25:21.219 } 00:25:21.219 ] 00:25:21.219 }, 00:25:21.219 { 00:25:21.219 "name": "nvmf_tgt_poll_group_001", 00:25:21.219 "admin_qpairs": 0, 00:25:21.219 "io_qpairs": 2, 00:25:21.219 "current_admin_qpairs": 0, 00:25:21.219 "current_io_qpairs": 2, 00:25:21.219 "pending_bdev_io": 0, 00:25:21.219 "completed_nvme_io": 27294, 00:25:21.219 "transports": [ 00:25:21.219 { 00:25:21.219 "trtype": "TCP" 00:25:21.219 } 00:25:21.219 ] 00:25:21.219 }, 00:25:21.219 { 00:25:21.219 "name": "nvmf_tgt_poll_group_002", 00:25:21.219 "admin_qpairs": 0, 00:25:21.219 "io_qpairs": 0, 00:25:21.219 "current_admin_qpairs": 0, 
00:25:21.219 "current_io_qpairs": 0, 00:25:21.219 "pending_bdev_io": 0, 00:25:21.219 "completed_nvme_io": 0, 00:25:21.219 "transports": [ 00:25:21.219 { 00:25:21.219 "trtype": "TCP" 00:25:21.219 } 00:25:21.219 ] 00:25:21.219 }, 00:25:21.219 { 00:25:21.219 "name": "nvmf_tgt_poll_group_003", 00:25:21.220 "admin_qpairs": 0, 00:25:21.220 "io_qpairs": 0, 00:25:21.220 "current_admin_qpairs": 0, 00:25:21.220 "current_io_qpairs": 0, 00:25:21.220 "pending_bdev_io": 0, 00:25:21.220 "completed_nvme_io": 0, 00:25:21.220 "transports": [ 00:25:21.220 { 00:25:21.220 "trtype": "TCP" 00:25:21.220 } 00:25:21.220 ] 00:25:21.220 } 00:25:21.220 ] 00:25:21.220 }' 00:25:21.220 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:21.220 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:21.480 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:21.480 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:21.480 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 983020 00:25:29.617 Initializing NVMe Controllers 00:25:29.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:29.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:29.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:29.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:29.617 Initialization complete. Launching workers. 
00:25:29.617 ======================================================== 00:25:29.617 Latency(us) 00:25:29.617 Device Information : IOPS MiB/s Average min max 00:25:29.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6897.90 26.94 9279.21 1413.18 55406.10 00:25:29.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8862.30 34.62 7220.89 892.36 54134.85 00:25:29.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9268.20 36.20 6931.36 1070.56 54288.56 00:25:29.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12548.30 49.02 5099.74 852.85 53930.20 00:25:29.617 ======================================================== 00:25:29.617 Total : 37576.69 146.78 6818.99 852.85 55406.10 00:25:29.617 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.617 rmmod nvme_tcp 00:25:29.617 rmmod nvme_fabrics 00:25:29.617 rmmod nvme_keyring 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:29.617 13:09:32 
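The latency table above reports per-core IOPS for lcores 4-7 plus a Total row. Summing the displayed per-core values reproduces the reported total within display rounding, which is a quick sanity check on the table; a small awk sketch using the figures from the log:

```shell
# Per-core IOPS from the spdk_nvme_perf latency table above (cores 4-7).
# Their sum should match the reported Total of 37576.69 IOPS, up to
# rounding of the individually displayed per-core values.
total=$(printf '%s\n' 6897.90 8862.30 9268.20 12548.30 \
  | awk '{ sum += $1 } END { printf "%.2f", sum }')
echo "summed IOPS: $total"   # 37576.70 vs. 37576.69 reported (rounding)
```

The 0.01 discrepancy is expected: the tool rounds each per-core figure independently before printing, so the displayed Total need not equal the sum of the displayed rows exactly.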
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 982810 ']' 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 982810 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 982810 ']' 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 982810 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982810 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982810' 00:25:29.617 killing process with pid 982810 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 982810 00:25:29.617 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 982810 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:29.877 13:09:32 
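The teardown trace above runs autotest_common.sh's `killprocess` against the nvmf target (pid 982810): it validates the pid argument, probes liveness with `kill -0`, checks the process name via `ps -o comm=` so it never kills `sudo` itself, then kills and reaps the process. A simplified sketch of that pattern (a hypothetical helper, not the exact autotest_common.sh code):

```shell
# Simplified killprocess pattern from the trace above: verify the pid
# is set and alive, refuse to kill sudo, then signal and reap it.
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1     # process still alive?
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1             # never kill sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap; ignore signal exit code
}

# Demo against a throwaway background sleep.
sleep 60 &
demo_pid=$!
killprocess_sketch "$demo_pid"
```

The real helper additionally guards on `uname` (the `'[' Linux = Linux ']'` step above) because `ps` option syntax differs on FreeBSD.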
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.877 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:31.788 00:25:31.788 real 0m53.193s 00:25:31.788 user 2m49.903s 00:25:31.788 sys 0m11.654s 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:31.788 ************************************ 00:25:31.788 END TEST nvmf_perf_adq 00:25:31.788 ************************************ 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.788 13:09:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.051 ************************************ 00:25:32.051 START TEST nvmf_shutdown 00:25:32.051 ************************************ 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:32.051 * Looking for test storage... 00:25:32.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.051 13:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:32.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.051 --rc genhtml_branch_coverage=1 00:25:32.051 --rc genhtml_function_coverage=1 00:25:32.051 --rc genhtml_legend=1 00:25:32.051 --rc geninfo_all_blocks=1 00:25:32.051 --rc geninfo_unexecuted_blocks=1 00:25:32.051 00:25:32.051 ' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:32.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.051 --rc genhtml_branch_coverage=1 00:25:32.051 --rc genhtml_function_coverage=1 00:25:32.051 --rc genhtml_legend=1 00:25:32.051 --rc geninfo_all_blocks=1 00:25:32.051 --rc geninfo_unexecuted_blocks=1 00:25:32.051 00:25:32.051 ' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:32.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.051 --rc genhtml_branch_coverage=1 00:25:32.051 --rc genhtml_function_coverage=1 00:25:32.051 --rc genhtml_legend=1 00:25:32.051 --rc geninfo_all_blocks=1 00:25:32.051 --rc geninfo_unexecuted_blocks=1 00:25:32.051 00:25:32.051 ' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:32.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.051 --rc genhtml_branch_coverage=1 00:25:32.051 --rc genhtml_function_coverage=1 00:25:32.051 --rc genhtml_legend=1 00:25:32.051 --rc geninfo_all_blocks=1 00:25:32.051 --rc geninfo_unexecuted_blocks=1 00:25:32.051 00:25:32.051 ' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
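The trace above is scripts/common.sh's `cmp_versions` deciding that the installed lcov (1.15) is older than 2: both version strings are split on `.-:` into arrays and compared component by component, padding the shorter array with implicit zeros. A condensed sketch of that comparison (simplified, not the exact common.sh code; assumes plain numeric components without leading zeros):

```shell
# Component-wise "less than" version compare, condensed from the
# cmp_versions trace above: split on .-: and compare numerically
# left to right, treating missing components as 0.
version_lt() {
  local IFS=.-:
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  local i a b
  for ((i = 0; i < n; i++)); do
    a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"       # the case traced above
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

In the log this comparison returns 0 (`lt 1.15 2` is true), which selects the LCOV_OPTS branch-coverage flags that follow.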
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.051 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.052 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.312 ************************************ 00:25:32.312 START TEST nvmf_shutdown_tc1 00:25:32.312 ************************************ 00:25:32.312 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.313 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:40.589 13:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.589 13:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:40.589 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:40.589 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.590 13:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:40.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:40.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:40.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:40.590 13:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.590 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:40.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:25:40.590 00:25:40.590 --- 10.0.0.2 ping statistics --- 00:25:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.590 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:25:40.590 00:25:40.590 --- 10.0.0.1 ping statistics --- 00:25:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.590 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=989464 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 989464 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 989464 ']' 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
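The `nvmf_tcp_init` entries above build the two-interface TCP test topology: `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), `cvl_0_1` stays in the host namespace as the initiator side (10.0.0.1), an iptables rule opens port 4420, and cross-namespace pings verify connectivity. A minimal sketch of that sequence, with the interface/namespace names and addresses taken from this run; it emits the commands as strings rather than executing them, since the real steps need root:

```python
# Sketch: the ip/iptables command sequence performed by nvmf_tcp_init above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.x, port 4420) come from
# this log; commands are returned as strings instead of run (root required).
def nvmf_tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                       ns="cvl_0_0_ns_spdk",
                       target_ip="10.0.0.2", initiator_ip="10.0.0.1",
                       port=4420):
    in_ns = f"ip netns exec {ns} "          # target-side commands run in the netns
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        in_ns + f"ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        in_ns + f"ip link set {target_if} up",
        in_ns + "ip link set lo up",
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
        f"ping -c 1 {target_ip}",           # host -> target namespace
        in_ns + f"ping -c 1 {initiator_ip}",  # target namespace -> host
    ]

for cmd in nvmf_tcp_init_cmds():
    print(cmd)
```

The `nvmf_tgt` process is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`), which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the log above.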
00:25:40.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.590 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.590 [2024-11-29 13:09:42.399175] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:40.590 [2024-11-29 13:09:42.399242] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.590 [2024-11-29 13:09:42.500027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.590 [2024-11-29 13:09:42.552015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.590 [2024-11-29 13:09:42.552066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.591 [2024-11-29 13:09:42.552075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.591 [2024-11-29 13:09:42.552083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.591 [2024-11-29 13:09:42.552089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.591 [2024-11-29 13:09:42.554474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.591 [2024-11-29 13:09:42.554640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.591 [2024-11-29 13:09:42.554847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.591 [2024-11-29 13:09:42.554847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:40.591 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.591 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:40.591 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.591 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.591 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.852 [2024-11-29 13:09:43.275674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.852 13:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.852 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.853 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:40.853 Malloc1 00:25:40.853 [2024-11-29 13:09:43.411436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.853 Malloc2 00:25:40.853 Malloc3 00:25:40.853 Malloc4 00:25:41.115 Malloc5 00:25:41.115 Malloc6 00:25:41.115 Malloc7 00:25:41.115 Malloc8 00:25:41.115 Malloc9 
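The `shutdown.sh@28-29` loop above cats one config fragment per index in `num_subsystems=({1..10})` into `rpcs.txt`, producing the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420 reported by the target. The exact RPC lines are not visible in this log; the sketch below assumes the standard SPDK `rpc.py` command names and hypothetical bdev sizing, with the target address/port taken from this run:

```python
# Sketch of the per-subsystem RPC batch assembled into rpcs.txt by the
# shutdown.sh loop above. RPC names are the standard SPDK rpc.py commands
# (assumed, not shown in this log); the Malloc size (64 MiB / 512 B) and
# serial format are illustrative placeholders.
def subsystem_rpcs(i, target_ip="10.0.0.2", port=4420):
    nqn = f"nqn.2016-06.io.spdk:cnode{i}"
    return [
        f"bdev_malloc_create -b Malloc{i} 64 512",
        f"nvmf_create_subsystem {nqn} -a -s SPDK{i:09d}",
        f"nvmf_subsystem_add_ns {nqn} Malloc{i}",
        f"nvmf_subsystem_add_listener {nqn} -t tcp -a {target_ip} -s {port}",
    ]

# Ten subsystems, matching num_subsystems=({1..10}) in the log.
rpcs = [line for i in range(1, 11) for line in subsystem_rpcs(i)]
```

Running the batch through `rpc_cmd` is what produces the interleaved `Malloc1` .. `Malloc10` and listener notices seen above.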
00:25:41.377 Malloc10 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=989729 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 989729 /var/tmp/bdevperf.sock 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 989729 ']' 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:41.377 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 
00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 [2024-11-29 13:09:43.925402] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:41.378 [2024-11-29 13:09:43.925475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:25:41.378 { 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme$subsystem", 00:25:41.378 "trtype": "$TEST_TRANSPORT", 00:25:41.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.378 "adrfam": "ipv4", 00:25:41.378 "trsvcid": "$NVMF_PORT", 00:25:41.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.378 "hdgst": ${hdgst:-false}, 00:25:41.378 "ddgst": ${ddgst:-false} 00:25:41.378 }, 00:25:41.378 "method": "bdev_nvme_attach_controller" 00:25:41.378 } 00:25:41.378 EOF 00:25:41.378 )") 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:41.378 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:41.378 "params": { 00:25:41.378 "name": "Nvme1", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme2", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme3", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": 
"10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme4", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme5", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme6", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme7", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 
"method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme8", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme9", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 },{ 00:25:41.379 "params": { 00:25:41.379 "name": "Nvme10", 00:25:41.379 "trtype": "tcp", 00:25:41.379 "traddr": "10.0.0.2", 00:25:41.379 "adrfam": "ipv4", 00:25:41.379 "trsvcid": "4420", 00:25:41.379 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:41.379 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:41.379 "hdgst": false, 00:25:41.379 "ddgst": false 00:25:41.379 }, 00:25:41.379 "method": "bdev_nvme_attach_controller" 00:25:41.379 }' 00:25:41.379 [2024-11-29 13:09:44.020608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.641 [2024-11-29 13:09:44.075108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 989729 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:43.024 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:43.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 989729 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 989464 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.966 { 00:25:43.966 "params": { 00:25:43.966 "name": "Nvme$subsystem", 00:25:43.966 "trtype": "$TEST_TRANSPORT", 00:25:43.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.966 "adrfam": "ipv4", 00:25:43.966 "trsvcid": "$NVMF_PORT", 00:25:43.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.966 "hdgst": ${hdgst:-false}, 00:25:43.966 "ddgst": ${ddgst:-false} 00:25:43.966 }, 00:25:43.966 "method": "bdev_nvme_attach_controller" 00:25:43.966 } 00:25:43.966 EOF 00:25:43.966 )") 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.966 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.966 { 00:25:43.966 "params": { 00:25:43.966 "name": "Nvme$subsystem", 00:25:43.966 "trtype": "$TEST_TRANSPORT", 00:25:43.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 
00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 
00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 [2024-11-29 13:09:46.366007] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:43.967 [2024-11-29 13:09:46.366061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990250 ] 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 "params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.967 "hdgst": ${hdgst:-false}, 00:25:43.967 "ddgst": ${ddgst:-false} 00:25:43.967 }, 00:25:43.967 "method": "bdev_nvme_attach_controller" 00:25:43.967 } 00:25:43.967 EOF 00:25:43.967 )") 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.967 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.967 { 00:25:43.967 
"params": { 00:25:43.967 "name": "Nvme$subsystem", 00:25:43.967 "trtype": "$TEST_TRANSPORT", 00:25:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.967 "adrfam": "ipv4", 00:25:43.967 "trsvcid": "$NVMF_PORT", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.968 "hdgst": ${hdgst:-false}, 00:25:43.968 "ddgst": ${ddgst:-false} 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 } 00:25:43.968 EOF 00:25:43.968 )") 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:43.968 { 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme$subsystem", 00:25:43.968 "trtype": "$TEST_TRANSPORT", 00:25:43.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "$NVMF_PORT", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.968 "hdgst": ${hdgst:-false}, 00:25:43.968 "ddgst": ${ddgst:-false} 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 } 00:25:43.968 EOF 00:25:43.968 )") 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:43.968 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme1", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme2", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme3", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme4", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 
00:25:43.968 "name": "Nvme5", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme6", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme7", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme8", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme9", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 },{ 00:25:43.968 "params": { 00:25:43.968 "name": "Nvme10", 00:25:43.968 "trtype": "tcp", 00:25:43.968 "traddr": "10.0.0.2", 00:25:43.968 "adrfam": "ipv4", 00:25:43.968 "trsvcid": "4420", 00:25:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:43.968 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:43.968 "hdgst": false, 00:25:43.968 "ddgst": false 00:25:43.968 }, 00:25:43.968 "method": "bdev_nvme_attach_controller" 00:25:43.968 }' 00:25:43.968 [2024-11-29 13:09:46.453889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.968 [2024-11-29 13:09:46.489948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.354 Running I/O for 1 seconds... 00:25:46.298 1864.00 IOPS, 116.50 MiB/s 00:25:46.298 Latency(us) 00:25:46.298 [2024-11-29T12:09:48.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.298 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme1n1 : 1.13 226.36 14.15 0.00 0.00 279879.68 23265.28 246415.36 00:25:46.298 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme2n1 : 1.13 230.81 14.43 0.00 0.00 268265.94 4314.45 246415.36 00:25:46.298 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme3n1 : 1.10 232.40 14.53 0.00 0.00 262656.00 21408.43 225443.84 00:25:46.298 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme4n1 : 1.09 235.06 14.69 0.00 0.00 254766.29 18350.08 260396.37 00:25:46.298 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme5n1 : 1.10 233.55 14.60 0.00 0.00 251695.79 21080.75 246415.36 00:25:46.298 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme6n1 : 1.21 213.29 13.33 0.00 0.00 262853.34 18677.76 270882.13 00:25:46.298 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme7n1 : 1.17 273.19 17.07 0.00 0.00 208366.93 15947.09 246415.36 00:25:46.298 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme8n1 : 1.18 271.31 16.96 0.00 0.00 206198.44 15728.64 228939.09 00:25:46.298 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme9n1 : 1.16 219.97 13.75 0.00 0.00 248880.00 14745.60 262144.00 00:25:46.298 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:46.298 Verification LBA range: start 0x0 length 0x400 00:25:46.298 Nvme10n1 : 1.18 274.37 17.15 0.00 0.00 195818.30 2771.63 248162.99 00:25:46.298 [2024-11-29T12:09:48.978Z] =================================================================================================================== 00:25:46.298 [2024-11-29T12:09:48.978Z] Total : 2410.32 150.64 0.00 0.00 241111.44 2771.63 270882.13 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.560 rmmod nvme_tcp 00:25:46.560 rmmod nvme_fabrics 00:25:46.560 rmmod nvme_keyring 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.560 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 989464 ']' 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 989464 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 989464 ']' 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 989464 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989464 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989464' 00:25:46.561 killing process with pid 989464 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 989464 00:25:46.561 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 989464 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:25:46.822 13:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.822 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.371 00:25:49.371 real 0m16.765s 00:25:49.371 user 0m33.403s 00:25:49.371 sys 0m7.015s 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 ************************************ 00:25:49.371 END TEST nvmf_shutdown_tc1 00:25:49.371 ************************************ 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 ************************************ 
00:25:49.371 START TEST nvmf_shutdown_tc2 00:25:49.371 ************************************ 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.371 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.371 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.371 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:49.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:49.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:49.371 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.371 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.372 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:49.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:49.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.372 13:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:25:49.372 00:25:49.372 --- 10.0.0.2 ping statistics --- 00:25:49.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.372 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:25:49.372 00:25:49.372 --- 10.0.0.1 ping statistics --- 00:25:49.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.372 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.372 13:09:51 
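The trace above (nvmf/common.sh@250-291) shows `nvmf_tcp_init` splitting the two e810 ports into a point-to-point pair: one port is moved into a private network namespace as the target side, the other stays in the default namespace as the initiator, and a ping in each direction verifies the link before the iptables rule admits NVMe/TCP traffic on port 4420. A minimal sketch of that sequence follows; the `run` helper only records and echoes each command so the sketch can be inspected without root privileges or the real NIC pair (interface and namespace names mirror the trace, but this is an illustration, not the actual nvmf/common.sh helper):

```shell
# Sketch of the netns setup performed by nvmf_tcp_init in the trace above.
# "run" records and echoes each command instead of executing it, so this
# can be read and sanity-checked without root or e810 hardware.
TARGET_IF=cvl_0_0       # moved into a private namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

CMDS=""
run() {
    CMDS="$CMDS$*
"
    echo "+ $*"
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target port lives in its own namespace, the target application must later be launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible further down in the trace) for its listener to bind to 10.0.0.2.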
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=991369 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 991369 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 991369 ']' 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.372 13:09:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.372 [2024-11-29 13:09:52.038308] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:49.372 [2024-11-29 13:09:52.038357] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.633 [2024-11-29 13:09:52.130062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.633 [2024-11-29 13:09:52.162103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.633 [2024-11-29 13:09:52.162134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.633 [2024-11-29 13:09:52.162139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.633 [2024-11-29 13:09:52.162144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.633 [2024-11-29 13:09:52.162149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.633 [2024-11-29 13:09:52.163662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.633 [2024-11-29 13:09:52.163814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.633 [2024-11-29 13:09:52.163963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.633 [2024-11-29 13:09:52.163964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.205 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.205 [2024-11-29 13:09:52.882487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.465 13:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.465 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.466 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.466 Malloc1 00:25:50.466 [2024-11-29 13:09:52.999963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.466 Malloc2 00:25:50.466 Malloc3 00:25:50.466 Malloc4 00:25:50.466 Malloc5 00:25:50.726 Malloc6 00:25:50.726 Malloc7 00:25:50.726 Malloc8 00:25:50.726 Malloc9 
00:25:50.726 Malloc10 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=991749 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 991749 /var/tmp/bdevperf.sock 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 991749 ']' 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.726 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.726 { 00:25:50.726 "params": { 00:25:50.726 "name": "Nvme$subsystem", 00:25:50.726 "trtype": "$TEST_TRANSPORT", 00:25:50.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.726 "adrfam": "ipv4", 00:25:50.726 "trsvcid": "$NVMF_PORT", 00:25:50.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.727 "hdgst": ${hdgst:-false}, 00:25:50.727 "ddgst": ${ddgst:-false} 00:25:50.727 }, 00:25:50.727 "method": "bdev_nvme_attach_controller" 00:25:50.727 } 00:25:50.727 EOF 00:25:50.727 )") 00:25:50.727 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.987 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:25:50.987 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.987 { 00:25:50.987 "params": { 00:25:50.987 "name": "Nvme$subsystem", 00:25:50.987 "trtype": "$TEST_TRANSPORT", 00:25:50.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.987 "adrfam": "ipv4", 00:25:50.987 "trsvcid": "$NVMF_PORT", 00:25:50.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.987 "hdgst": ${hdgst:-false}, 00:25:50.987 "ddgst": ${ddgst:-false} 00:25:50.987 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 
00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 [2024-11-29 13:09:53.443847] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:50.988 [2024-11-29 13:09:53.443901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991749 ] 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:50.988 13:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:50.988 { 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme$subsystem", 00:25:50.988 "trtype": "$TEST_TRANSPORT", 00:25:50.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "$NVMF_PORT", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.988 "hdgst": ${hdgst:-false}, 00:25:50.988 "ddgst": ${ddgst:-false} 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 } 00:25:50.988 EOF 00:25:50.988 )") 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:50.988 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme1", 00:25:50.988 "trtype": "tcp", 00:25:50.988 "traddr": "10.0.0.2", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "4420", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.988 "hdgst": false, 00:25:50.988 "ddgst": false 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 },{ 00:25:50.988 "params": { 00:25:50.988 "name": "Nvme2", 00:25:50.988 "trtype": "tcp", 00:25:50.988 "traddr": "10.0.0.2", 00:25:50.988 "adrfam": "ipv4", 00:25:50.988 "trsvcid": "4420", 00:25:50.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:50.988 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:50.988 "hdgst": false, 00:25:50.988 "ddgst": false 00:25:50.988 }, 00:25:50.988 "method": "bdev_nvme_attach_controller" 00:25:50.988 },{ 
00:25:50.988 "params": { 00:25:50.988 "name": "Nvme3", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme4", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme5", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme6", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme7", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:50.989 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme8", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme9", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 },{ 00:25:50.989 "params": { 00:25:50.989 "name": "Nvme10", 00:25:50.989 "trtype": "tcp", 00:25:50.989 "traddr": "10.0.0.2", 00:25:50.989 "adrfam": "ipv4", 00:25:50.989 "trsvcid": "4420", 00:25:50.989 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:50.989 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:50.989 "hdgst": false, 00:25:50.989 "ddgst": false 00:25:50.989 }, 00:25:50.989 "method": "bdev_nvme_attach_controller" 00:25:50.989 }' 00:25:50.989 [2024-11-29 13:09:53.531887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.989 [2024-11-29 13:09:53.567957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.374 Running I/O for 10 seconds... 
00:25:52.374 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.374 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:52.374 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:52.374 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.374 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:52.634 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:52.893 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:53.154 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:53.154 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 991749 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 991749 ']' 
00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 991749 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.155 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991749 00:25:53.415 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.415 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.415 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991749' 00:25:53.415 killing process with pid 991749 00:25:53.415 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 991749 00:25:53.415 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 991749 00:25:53.415 Received shutdown signal, test time was about 0.973039 seconds 00:25:53.415 00:25:53.415 Latency(us) 00:25:53.415 [2024-11-29T12:09:56.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme1n1 : 0.97 263.33 16.46 0.00 0.00 240069.55 16493.23 246415.36 00:25:53.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme2n1 : 0.96 266.27 16.64 0.00 0.00 232293.97 16384.00 241172.48 00:25:53.415 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme3n1 : 0.94 205.13 12.82 0.00 0.00 295220.91 17694.72 270882.13 00:25:53.415 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme4n1 : 0.95 270.80 16.93 0.00 0.00 218785.07 12233.39 237677.23 00:25:53.415 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme5n1 : 0.97 264.58 16.54 0.00 0.00 219487.79 18896.21 249910.61 00:25:53.415 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme6n1 : 0.94 204.29 12.77 0.00 0.00 276867.13 15182.51 248162.99 00:25:53.415 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme7n1 : 0.97 265.27 16.58 0.00 0.00 209132.16 15073.28 246415.36 00:25:53.415 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme8n1 : 0.96 267.15 16.70 0.00 0.00 202643.84 15947.09 255153.49 00:25:53.415 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme9n1 : 0.95 201.26 12.58 0.00 0.00 262484.48 19770.03 269134.51 00:25:53.415 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:53.415 Verification LBA range: start 0x0 length 0x400 00:25:53.415 Nvme10n1 : 0.95 202.13 12.63 0.00 0.00 254737.07 39540.05 248162.99 00:25:53.415 [2024-11-29T12:09:56.095Z] =================================================================================================================== 00:25:53.415 [2024-11-29T12:09:56.095Z] 
Total : 2410.22 150.64 0.00 0.00 237710.51 12233.39 270882.13 00:25:53.415 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.800 rmmod nvme_tcp 00:25:54.800 rmmod nvme_fabrics 00:25:54.800 rmmod nvme_keyring 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 991369 ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 991369 ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991369' 00:25:54.800 killing process with pid 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 991369 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 991369 00:25:54.800 
13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.800 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.346 00:25:57.346 real 0m7.911s 00:25:57.346 user 0m23.927s 00:25:57.346 sys 0m1.331s 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.346 13:09:59 
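The `iptr` step above tears down only the firewall rules the test installed: every SPDK rule carries a `SPDK_NVMF:` comment (visible at nvmf/common.sh@790 below), so cleanup is `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filtering idea with the save/restore pair simulated by a variable, since the real commands need root (the rule lines here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the iptr cleanup: tag-and-filter firewall rules. The middle rule
# is the kind installed by the ipts helper; the others are pre-existing.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p icmp -j ACCEPT'

# Real helper: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging each rule with a unique comment at install time is what makes this safe: teardown never has to guess rule positions, and rules added by the host or other suites survive untouched.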
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:57.346 ************************************ 00:25:57.346 END TEST nvmf_shutdown_tc2 00:25:57.346 ************************************ 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.346 ************************************ 00:25:57.346 START TEST nvmf_shutdown_tc3 00:25:57.346 ************************************ 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.346 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.347 13:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:57.347 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.347 13:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:57.347 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:57.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.347 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:57.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.348 13:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:25:57.348 00:25:57.348 --- 10.0.0.2 ping statistics --- 00:25:57.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.348 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:25:57.348 00:25:57.348 --- 10.0.0.1 ping statistics --- 00:25:57.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.348 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
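The trace above is nvmf/common.sh moving one port of the NIC pair into a private network namespace, opening the NVMe/TCP port in iptables, and ping-testing both directions. The same topology can be sketched as a standalone script; the interface names, namespace name, and 10.0.0.0/24 addresses below come straight from the log, while the `run`/`DRY_RUN` wrapper is an addition of mine so the sketch can be exercised without root.

```shell
# Dry-run sketch of the namespace setup traced above. DRY_RUN=echo (the
# default) only prints each command; set DRY_RUN= to execute for real,
# which requires root and the cvl_0_0/cvl_0_1 interfaces to exist.
DRY_RUN="${DRY_RUN:-echo}"
run() { $DRY_RUN "$@"; }

setup_target_ns() {
  ns=cvl_0_0_ns_spdk
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  run ip netns add "$ns"
  run ip link set cvl_0_0 netns "$ns"        # target-side port moves into the netns
  run ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP listener port, then check reachability both ways
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```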
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=993168 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 993168 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 993168 ']' 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.348 13:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.348 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:57.610 [2024-11-29 13:10:00.062138] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:57.610 [2024-11-29 13:10:00.062206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.610 [2024-11-29 13:10:00.149592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.610 [2024-11-29 13:10:00.180898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.610 [2024-11-29 13:10:00.180929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.610 [2024-11-29 13:10:00.180935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.610 [2024-11-29 13:10:00.180940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.610 [2024-11-29 13:10:00.180945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
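The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is waitforlisten from autotest_common.sh blocking until the freshly launched nvmf_tgt owns its RPC socket. The real helper also probes the socket with an RPC call; the minimal sketch below (the `waitforsocket` name and the bare `-S` file test are simplifications of mine) captures just the retry shape:

```shell
# Poll until a UNIX-domain socket path appears, up to max_retries tries.
# Simplified stand-in for waitforlisten; the real helper additionally
# verifies that the process answers an RPC on that socket.
waitforsocket() {
  sock=$1
  max_retries=${2:-100}
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -S "$sock" ] && return 0   # socket exists: process is listening
    sleep 0.1
    i=$((i + 1))
  done
  return 1                       # process never started listening
}
```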
00:25:57.610 [2024-11-29 13:10:00.182221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.610 [2024-11-29 13:10:00.182489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.610 [2024-11-29 13:10:00.182639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.610 [2024-11-29 13:10:00.182640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.550 [2024-11-29 13:10:00.920255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.550 13:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.550 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.550 Malloc1 00:25:58.550 [2024-11-29 13:10:01.032686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.550 Malloc2 00:25:58.550 Malloc3 00:25:58.550 Malloc4 00:25:58.550 Malloc5 00:25:58.550 Malloc6 00:25:58.812 Malloc7 00:25:58.812 Malloc8 00:25:58.812 Malloc9 
00:25:58.812 Malloc10 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=993415 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 993415 /var/tmp/bdevperf.sock 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 993415 ']' 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": 
${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:58.812 [2024-11-29 13:10:01.483820] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:25:58.812 [2024-11-29 13:10:01.483876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993415 ] 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:58.812 { 00:25:58.812 "params": { 00:25:58.812 "name": "Nvme$subsystem", 00:25:58.812 "trtype": "$TEST_TRANSPORT", 00:25:58.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.812 "adrfam": "ipv4", 00:25:58.812 "trsvcid": "$NVMF_PORT", 00:25:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.812 "hdgst": ${hdgst:-false}, 00:25:58.812 "ddgst": ${ddgst:-false} 00:25:58.812 }, 00:25:58.812 "method": "bdev_nvme_attach_controller" 00:25:58.812 } 00:25:58.812 EOF 00:25:58.812 )") 00:25:58.812 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.073 { 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme$subsystem", 00:25:59.073 
"trtype": "$TEST_TRANSPORT", 00:25:59.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 "trsvcid": "$NVMF_PORT", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.073 "hdgst": ${hdgst:-false}, 00:25:59.073 "ddgst": ${ddgst:-false} 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 } 00:25:59.073 EOF 00:25:59.073 )") 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.073 { 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme$subsystem", 00:25:59.073 "trtype": "$TEST_TRANSPORT", 00:25:59.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 "trsvcid": "$NVMF_PORT", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.073 "hdgst": ${hdgst:-false}, 00:25:59.073 "ddgst": ${ddgst:-false} 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 } 00:25:59.073 EOF 00:25:59.073 )") 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.073 { 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme$subsystem", 00:25:59.073 "trtype": "$TEST_TRANSPORT", 00:25:59.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 
"trsvcid": "$NVMF_PORT", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.073 "hdgst": ${hdgst:-false}, 00:25:59.073 "ddgst": ${ddgst:-false} 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 } 00:25:59.073 EOF 00:25:59.073 )") 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:59.073 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme1", 00:25:59.073 "trtype": "tcp", 00:25:59.073 "traddr": "10.0.0.2", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 "trsvcid": "4420", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.073 "hdgst": false, 00:25:59.073 "ddgst": false 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 },{ 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme2", 00:25:59.073 "trtype": "tcp", 00:25:59.073 "traddr": "10.0.0.2", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 "trsvcid": "4420", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:59.073 "hdgst": false, 00:25:59.073 "ddgst": false 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 },{ 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme3", 00:25:59.073 "trtype": "tcp", 00:25:59.073 "traddr": "10.0.0.2", 00:25:59.073 "adrfam": "ipv4", 00:25:59.073 "trsvcid": "4420", 00:25:59.073 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:59.073 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:59.073 "hdgst": false, 
00:25:59.073 "ddgst": false 00:25:59.073 }, 00:25:59.073 "method": "bdev_nvme_attach_controller" 00:25:59.073 },{ 00:25:59.073 "params": { 00:25:59.073 "name": "Nvme4", 00:25:59.073 "trtype": "tcp", 00:25:59.073 "traddr": "10.0.0.2", 00:25:59.073 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme5", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme6", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme7", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme8", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 
00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme9", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 },{ 00:25:59.074 "params": { 00:25:59.074 "name": "Nvme10", 00:25:59.074 "trtype": "tcp", 00:25:59.074 "traddr": "10.0.0.2", 00:25:59.074 "adrfam": "ipv4", 00:25:59.074 "trsvcid": "4420", 00:25:59.074 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:59.074 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:59.074 "hdgst": false, 00:25:59.074 "ddgst": false 00:25:59.074 }, 00:25:59.074 "method": "bdev_nvme_attach_controller" 00:25:59.074 }' 00:25:59.074 [2024-11-29 13:10:01.572524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.074 [2024-11-29 13:10:01.608846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.459 Running I/O for 10 seconds... 
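The wall of heredoc fragments traced above is gen_nvmf_target_json building one `bdev_nvme_attach_controller` parameter block per subsystem id and joining them for bdevperf's `--json /dev/fd/63` input, as shown by the final `printf '%s\n'` expansion. A condensed sketch of that pattern follows; the fixed tcp/10.0.0.2/4420 values mirror this run's expansion, while the function name and the omission of the `jq` post-processing step are simplifications of mine.

```shell
# Build a comma-joined list of attach-controller param blocks, one per
# subsystem id passed in, in the shape seen in the traced output above.
gen_target_json() {
  config=
  for subsystem in "$@"; do
    block=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
    config="${config:+$config,}$block"   # join successive blocks with commas
  done
  printf '%s\n' "$config"
}
```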
00:26:00.460 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:00.460 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:26:00.460 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:26:00.460 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.460 13:10:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.460 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:00.720 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.721 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:26:00.721 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:26:00.721 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:26:00.981 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:26:01.256 13:10:03 
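The xtrace above records `target/shutdown.sh`'s `waitforio` helper: it polls `bdev_get_iostat` over the bdevperf RPC socket until the bdev has served at least 100 reads (here 3, then 67, then 131), retrying up to 10 times with a 0.25 s back-off. The following is a minimal standalone sketch of that loop, not the SPDK script itself: `rpc_cmd` is stubbed to fabricate a growing counter so the sketch runs without a live target, and a `sed` extraction stands in for the `jq -r '.bdevs[0].num_read_ops'` call shown in the log.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop traced above (shutdown.sh@51-70).
# The rpc_cmd stub is hypothetical: it pretends I/O grows by 64 reads per
# poll instead of querying a real SPDK target over the RPC socket.

read_ops=0
rpc_cmd() {
    read_ops=$((read_ops + 64))
    printf '{"bdevs":[{"name":"Nvme1n1","num_read_ops":%d}]}\n' "$read_ops"
}

waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    [ -z "$sock" ] && return 1       # shutdown.sh@51: RPC socket is required
    [ -z "$bdev" ] && return 1       # shutdown.sh@55: bdev name is required
    for ((i = 10; i != 0; i--)); do  # shutdown.sh@60: at most 10 polls
        # shutdown.sh@61: read the current I/O counter for the bdev
        # (the real helper pipes rpc_cmd through jq -r '.bdevs[0].num_read_ops')
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
        if [ "$count" -ge 100 ]; then  # shutdown.sh@64: enough I/O observed
            ret=0
            break                      # shutdown.sh@66
        fi
        sleep 0.25                     # shutdown.sh@68: back off and retry
    done
    return "$ret"                      # 0 only if the threshold was reached
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 && echo "I/O is flowing"
```

With the stub growing by 64 per call, the threshold is crossed on the second poll, matching the shape of the trace (one `sleep 0.25` before success in the stub, two in the real run).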
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 993168
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 993168 ']'
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 993168
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 993168
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 993168'
killing process with pid 993168
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 993168
00:26:01.256 13:10:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 993168
00:26:01.256 [2024-11-29 13:10:03.851717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12859f0 is same with the state(6) to be set
00:26:01.256 [2024-11-29 13:10:03.851764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12859f0 is same with the state(6) to be set
00:26:01.256 [2024-11-29 13:10:03.851771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12859f0 is same with the state(6) to be set
[... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x12859f0 elided ...]
00:26:01.257 [2024-11-29 13:10:03.853015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257d00 is same with the state(6) to be set
[... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1257d00 elided ...]
00:26:01.258 [2024-11-29 13:10:03.854640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12581d0 is same with the state(6) to be set
[... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x12581d0 elided ...]
00:26:01.258 [2024-11-29 13:10:03.856245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set
[... identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1258b90 elided ...]
00:26:01.258 [2024-11-29 13:10:03.856353] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.258 [2024-11-29 13:10:03.856387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856415] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856472] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856529] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.856567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258b90 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857316] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857374] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857439] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857499] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857559] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.259 [2024-11-29 13:10:03.857573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.857612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259060 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858411] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858484] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858542] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858599] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.858657] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259530 is same with the state(6) to be set 00:26:01.260 [2024-11-29 13:10:03.859235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a20 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.859814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedcc0 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.859940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.859989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.859996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c05610 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.860044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155c00 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.860185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2145fa0 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.860290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860355] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21182b0 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.860384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2112920 is same with the state(6) to be set 00:26:01.261 [2024-11-29 13:10:03.860475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.261 [2024-11-29 13:10:03.860508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.261 [2024-11-29 13:10:03.860515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3190 is same with the state(6) to be set 00:26:01.262 [2024-11-29 13:10:03.860561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860594] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ced850 is same with the state(6) to be set 00:26:01.262 [2024-11-29 13:10:03.860651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:01.262 [2024-11-29 13:10:03.860705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.860713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebfc0 is same with the state(6) to be set 00:26:01.262 [2024-11-29 13:10:03.861374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 
13:10:03.861467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.262 [2024-11-29 13:10:03.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.262 [2024-11-29 13:10:03.861931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-29 13:10:03.861938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.861947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.861959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.861969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.861985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 
[2024-11-29 13:10:03.862134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:01.263 [2024-11-29 13:10:03.862644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 
[2024-11-29 13:10:03.862686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.263 [2024-11-29 13:10:03.862830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.263 [2024-11-29 13:10:03.862840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.862991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.862998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 
13:10:03.863065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.863218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.863229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.869663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a20 is same with the state(6) to be set 00:26:01.264 [2024-11-29 13:10:03.869687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a20 is same with the state(6) to be set 00:26:01.264 [2024-11-29 13:10:03.869696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a20 is same with the state(6) to be set 00:26:01.264 [2024-11-29 13:10:03.869704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a20 is same with the 
state(6) to be set
00:26:01.264 [2024-11-29 13:10:03.879695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.879729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.879740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.879748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.879758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.879766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.879776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.879783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.879793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.264 [2024-11-29 13:10:03.879800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.264 [2024-11-29 13:10:03.879810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.879983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.879990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880074] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.880246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.880255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eeee0 is same with the state(6) to be set 00:26:01.265 [2024-11-29 13:10:03.880967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedcc0 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.880997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c05610 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.265 [2024-11-29 13:10:03.881038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.881046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.265 [2024-11-29 13:10:03.881053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.881062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.265 [2024-11-29 13:10:03.881069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.881077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.265 [2024-11-29 13:10:03.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.881092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2145810 is same with the state(6) to be set 00:26:01.265 [2024-11-29 13:10:03.881111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155c00 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2145fa0 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21182b0 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2112920 (9): Bad file descriptor 00:26:01.265 [2024-11-29 
13:10:03.881182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3190 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ced850 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.881211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebfc0 (9): Bad file descriptor 00:26:01.265 [2024-11-29 13:10:03.883992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.265 [2024-11-29 13:10:03.884168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.265 [2024-11-29 13:10:03.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.266 [2024-11-29 13:10:03.884185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 
13:10:03.884567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884662] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 
[2024-11-29 13:10:03.884856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.266 [2024-11-29 13:10:03.884899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.266 [2024-11-29 13:10:03.884906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.884924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.884940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.884956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.884973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.884990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.884999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.885090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.885200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:01.267 [2024-11-29 13:10:03.885223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:01.267 [2024-11-29 13:10:03.887145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.267 [2024-11-29 13:10:03.887176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cedcc0 with addr=10.0.0.2, port=4420 00:26:01.267 [2024-11-29 13:10:03.887186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedcc0 is same with the state(6) to be set 00:26:01.267 [2024-11-29 13:10:03.887467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.267 
[2024-11-29 13:10:03.887506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3190 with addr=10.0.0.2, port=4420 00:26:01.267 [2024-11-29 13:10:03.887518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3190 is same with the state(6) to be set 00:26:01.267 [2024-11-29 13:10:03.888214] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.267 [2024-11-29 13:10:03.888264] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.267 [2024-11-29 13:10:03.888302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 
13:10:03.888383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.267 [2024-11-29 13:10:03.888678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.267 [2024-11-29 13:10:03.888785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.267 [2024-11-29 13:10:03.888796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.888984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.888991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 
13:10:03.889060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 
[2024-11-29 13:10:03.889357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.268 [2024-11-29 13:10:03.889408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.268 [2024-11-29 13:10:03.889417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f25a0 is same with the state(6) to be set 00:26:01.268 [2024-11-29 13:10:03.889551] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.268 [2024-11-29 13:10:03.889576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:26:01.268 [2024-11-29 13:10:03.889592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2145810 (9): Bad file descriptor 00:26:01.268 [2024-11-29 13:10:03.889605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedcc0 (9): Bad file descriptor 00:26:01.268 [2024-11-29 
13:10:03.889615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3190 (9): Bad file descriptor 00:26:01.268 [2024-11-29 13:10:03.889677] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.269 [2024-11-29 13:10:03.889719] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.269 [2024-11-29 13:10:03.891070] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:01.269 [2024-11-29 13:10:03.891367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:01.269 [2024-11-29 13:10:03.891402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:01.269 [2024-11-29 13:10:03.891411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:01.269 [2024-11-29 13:10:03.891420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:01.269 [2024-11-29 13:10:03.891428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:01.269 [2024-11-29 13:10:03.891437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:01.269 [2024-11-29 13:10:03.891443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:01.269 [2024-11-29 13:10:03.891450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:01.269 [2024-11-29 13:10:03.891456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:26:01.269 [2024-11-29 13:10:03.891926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.269 [2024-11-29 13:10:03.891943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2145810 with addr=10.0.0.2, port=4420 00:26:01.269 [2024-11-29 13:10:03.891951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2145810 is same with the state(6) to be set 00:26:01.269 [2024-11-29 13:10:03.892116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.269 [2024-11-29 13:10:03.892126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c05610 with addr=10.0.0.2, port=4420 00:26:01.269 [2024-11-29 13:10:03.892133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c05610 is same with the state(6) to be set 00:26:01.269 [2024-11-29 13:10:03.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 
13:10:03.892220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.269 [2024-11-29 13:10:03.892513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892605] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.269 [2024-11-29 13:10:03.892767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.269 [2024-11-29 13:10:03.892774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 
13:10:03.892891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.892994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 
[2024-11-29 13:10:03.893187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.893262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2bc0 is same with the state(6) to be set 00:26:01.270 [2024-11-29 13:10:03.894614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.270 [2024-11-29 13:10:03.894831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.270 [2024-11-29 13:10:03.894848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.270 [2024-11-29 13:10:03.894855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894924] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.894984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.894991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 
13:10:03.895218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895313] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 
[2024-11-29 13:10:03.895505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.271 [2024-11-29 13:10:03.895592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.271 [2024-11-29 13:10:03.895601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.895710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.895719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3cb0 is same with the state(6) to be set 00:26:01.272 [2024-11-29 13:10:03.897076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.272 [2024-11-29 13:10:03.897155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.272 [2024-11-29 13:10:03.897448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.272 [2024-11-29 13:10:03.897592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.272 [2024-11-29 13:10:03.897601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 
13:10:03.897830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.897983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.897994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 
[2024-11-29 13:10:03.898122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.898179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.898187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f00e0 is same with the state(6) to be set 00:26:01.273 [2024-11-29 13:10:03.899529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.273 [2024-11-29 13:10:03.899685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.273 [2024-11-29 13:10:03.899694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:01.274 [2024-11-29 13:10:03.899762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.899987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.899995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 13:10:03.900130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.274 [2024-11-29 13:10:03.900140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.274 [2024-11-29 
13:10:03.900147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:01.274 [2024-11-29 13:10:03.900157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.274 [2024-11-29 13:10:03.900168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:37-63 (lba:21120-24448, len:128, nsid:1, sqid:1) ...]
00:26:01.275 [2024-11-29 13:10:03.900644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f12e0 is same with the state(6) to be set
00:26:01.275 [2024-11-29 13:10:03.902262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.275 [2024-11-29 13:10:03.902277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for cid:60-63 (lba:32256-32640), then READ pairs for cid:0-58 (lba:24576-32000) ...]
00:26:01.276 [2024-11-29 13:10:03.903365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f3860 is same with the state(6) to be set
00:26:01.276 [2024-11-29 13:10:03.904717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.276 [2024-11-29 13:10:03.904731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-13 (lba:16512-18048) ...]
00:26:01.277 [2024-11-29 13:10:03.904957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:26:01.277 [2024-11-29 13:10:03.904966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.904974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.904984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.904991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 
13:10:03.905353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.277 [2024-11-29 13:10:03.905539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.277 [2024-11-29 13:10:03.905548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 
[2024-11-29 13:10:03.905640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.278 [2024-11-29 13:10:03.905810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.278 [2024-11-29 13:10:03.905818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2e470 is same with the state(6) to be set 00:26:01.278 [2024-11-29 13:10:03.907416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 
1] resetting controller 00:26:01.278 [2024-11-29 13:10:03.907445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:01.278 [2024-11-29 13:10:03.907457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:01.278 [2024-11-29 13:10:03.907470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:01.278 [2024-11-29 13:10:03.907517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2145810 (9): Bad file descriptor 00:26:01.278 [2024-11-29 13:10:03.907529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c05610 (9): Bad file descriptor 00:26:01.278 [2024-11-29 13:10:03.907567] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:01.278 [2024-11-29 13:10:03.907580] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:26:01.278 [2024-11-29 13:10:03.907597] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:01.278 [2024-11-29 13:10:03.907607] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:26:01.278 [2024-11-29 13:10:03.907688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:01.540 task offset: 24576 on job bdev=Nvme1n1 fails
00:26:01.540 Latency(us)
00:26:01.540 [2024-11-29T12:10:04.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:01.540 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme1n1 ended in about 0.96 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme1n1 : 0.96 199.73 12.48 66.58 0.00 237583.79 24248.32 251658.24
00:26:01.540 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme2n1 ended in about 0.97 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme2n1 : 0.97 141.78 8.86 65.75 0.00 298914.45 18022.40 298844.16
00:26:01.540 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme3n1 ended in about 0.98 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme3n1 : 0.98 196.76 12.30 65.59 0.00 231490.99 16820.91 230686.72
00:26:01.540 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme4n1 ended in about 0.96 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme4n1 : 0.96 199.46 12.47 66.49 0.00 223382.19 23811.41 234181.97
00:26:01.540 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme5n1 ended in about 0.98 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme5n1 : 0.98 130.84 8.18 65.42 0.00 296659.63 14199.47 253405.87
00:26:01.540 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme6n1 ended in about 0.98 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme6n1 : 0.98 130.52 8.16 65.26 0.00 290960.50 15510.19 276125.01
00:26:01.540 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme7n1 ended in about 0.97 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme7n1 : 0.97 202.09 12.63 65.99 0.00 207352.33 20862.29 253405.87
00:26:01.540 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme8n1 ended in about 0.98 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme8n1 : 0.98 195.24 12.20 65.08 0.00 209145.39 13598.72 248162.99
00:26:01.540 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme9n1 ended in about 0.97 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme9n1 : 0.97 204.02 12.75 66.28 0.00 196006.42 4942.51 251658.24
00:26:01.540 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:01.540 Job: Nvme10n1 ended in about 0.99 seconds with error
00:26:01.540 Verification LBA range: start 0x0 length 0x400
00:26:01.540 Nvme10n1 : 0.99 129.83 8.11 64.92 0.00 266974.72 23592.96 248162.99
00:26:01.540 [2024-11-29T12:10:04.220Z] ===================================================================================================================
00:26:01.540 [2024-11-29T12:10:04.220Z] Total : 1730.28 108.14 657.35 0.00 241214.99 4942.51 298844.16
00:26:01.540 [2024-11-29 13:10:03.933866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:01.540 [2024-11-29 13:10:03.933916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:01.540 [2024-11-29 13:10:03.934379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:01.540 [2024-11-29 13:10:03.934399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0x1ced850 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.934409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ced850 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.934759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.934770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cebfc0 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.934778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebfc0 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.935106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.935116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2112920 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.935124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2112920 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.935331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.935342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21182b0 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.935356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21182b0 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.935364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:01.540 [2024-11-29 13:10:03.935371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:01.540 [2024-11-29 13:10:03.935380] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:01.540 [2024-11-29 13:10:03.935389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:01.540 [2024-11-29 13:10:03.935398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:01.540 [2024-11-29 13:10:03.935405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:01.540 [2024-11-29 13:10:03.935412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:01.540 [2024-11-29 13:10:03.935418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:01.540 [2024-11-29 13:10:03.937054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:01.540 [2024-11-29 13:10:03.937069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:01.540 [2024-11-29 13:10:03.937285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.937299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2145fa0 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.937307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2145fa0 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.937533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.937543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155c00 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.937550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2155c00 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.937563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ced850 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.937574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cebfc0 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.937584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2112920 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.937593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21182b0 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.937635] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:01.540 [2024-11-29 13:10:03.937650] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:26:01.540 [2024-11-29 13:10:03.937660] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:01.540 [2024-11-29 13:10:03.937673] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:26:01.540 [2024-11-29 13:10:03.938220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.938238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3190 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.938246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3190 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.938529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.540 [2024-11-29 13:10:03.938539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cedcc0 with addr=10.0.0.2, port=4420 00:26:01.540 [2024-11-29 13:10:03.938546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cedcc0 is same with the state(6) to be set 00:26:01.540 [2024-11-29 13:10:03.938556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2145fa0 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.938565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155c00 (9): Bad file descriptor 00:26:01.540 [2024-11-29 13:10:03.938574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:01.540 [2024-11-29 13:10:03.938580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.938588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.938596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:26:01.541 [2024-11-29 13:10:03.938604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.938610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.938617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.938623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.938631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.938637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.938643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.938650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.938657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.938664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.938670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.938677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:26:01.541 [2024-11-29 13:10:03.939444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:01.541 [2024-11-29 13:10:03.939461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:26:01.541 [2024-11-29 13:10:03.939490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3190 (9): Bad file descriptor 00:26:01.541 [2024-11-29 13:10:03.939502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cedcc0 (9): Bad file descriptor 00:26:01.541 [2024-11-29 13:10:03.939511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.939519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.939527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.939535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.939547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.939555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.939563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.939571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:26:01.541 [2024-11-29 13:10:03.939901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.541 [2024-11-29 13:10:03.939916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c05610 with addr=10.0.0.2, port=4420 00:26:01.541 [2024-11-29 13:10:03.939926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c05610 is same with the state(6) to be set 00:26:01.541 [2024-11-29 13:10:03.940279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.541 [2024-11-29 13:10:03.940289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2145810 with addr=10.0.0.2, port=4420 00:26:01.541 [2024-11-29 13:10:03.940297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2145810 is same with the state(6) to be set 00:26:01.541 [2024-11-29 13:10:03.940304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.940310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.940317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.940324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.940331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.940338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.940344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:01.541 [2024-11-29 13:10:03.940351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.940380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c05610 (9): Bad file descriptor 00:26:01.541 [2024-11-29 13:10:03.940390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2145810 (9): Bad file descriptor 00:26:01.541 [2024-11-29 13:10:03.940416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.940423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.940430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.940436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:01.541 [2024-11-29 13:10:03.940443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:01.541 [2024-11-29 13:10:03.940449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:01.541 [2024-11-29 13:10:03.940457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:01.541 [2024-11-29 13:10:03.940463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:26:01.541 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 993415 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 993415 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 993415 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.486 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:02.487 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.487 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.487 rmmod nvme_tcp 00:26:02.487 rmmod nvme_fabrics 00:26:02.747 rmmod nvme_keyring 00:26:02.747 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.747 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:02.747 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:02.748 13:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 993168 ']' 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 993168 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 993168 ']' 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 993168 00:26:02.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (993168) - No such process 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 993168 is not found' 00:26:02.748 Process with pid 993168 is not found 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.748 13:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.661 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.661 00:26:04.661 real 0m7.692s 00:26:04.661 user 0m18.523s 00:26:04.661 sys 0m1.228s 00:26:04.661 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.661 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:04.661 ************************************ 00:26:04.661 END TEST nvmf_shutdown_tc3 00:26:04.661 ************************************ 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:04.923 ************************************ 00:26:04.923 START TEST nvmf_shutdown_tc4 00:26:04.923 ************************************ 00:26:04.923 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:04.923 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.923 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.924 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:04.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:04.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.924 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:26:04.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:04.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.924 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.924 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:05.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:26:05.186 00:26:05.186 --- 10.0.0.2 ping statistics --- 00:26:05.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.186 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:26:05.186 00:26:05.186 --- 10.0.0.1 ping statistics --- 00:26:05.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.186 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.186 13:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=994732 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 994732 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 994732 ']' 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.186 13:10:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:05.186 [2024-11-29 13:10:07.823386] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:26:05.186 [2024-11-29 13:10:07.823451] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.448 [2024-11-29 13:10:07.917691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.448 [2024-11-29 13:10:07.951678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.448 [2024-11-29 13:10:07.951710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.448 [2024-11-29 13:10:07.951715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.448 [2024-11-29 13:10:07.951720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.448 [2024-11-29 13:10:07.951724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:05.448 [2024-11-29 13:10:07.953047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.448 [2024-11-29 13:10:07.953238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.448 [2024-11-29 13:10:07.953567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.448 [2024-11-29 13:10:07.953568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.018 [2024-11-29 13:10:08.669224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.018 13:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:06.018 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.019 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.019 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.019 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.019 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.019 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.280 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.280 Malloc1 00:26:06.280 [2024-11-29 13:10:08.776023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.280 Malloc2 00:26:06.280 Malloc3 00:26:06.280 Malloc4 00:26:06.280 Malloc5 00:26:06.280 Malloc6 00:26:06.540 Malloc7 00:26:06.540 Malloc8 00:26:06.540 Malloc9 
00:26:06.540 Malloc10 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=995113 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:06.540 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:06.801 [2024-11-29 13:10:09.252625] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
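shutdown_tc4 now lets spdk_nvme_perf run against the listener in the background and, after the sleep at target/shutdown.sh@150, kills the nvmf_tgt pid out from under it; the flood of `Write completed with error (sct=0, sc=8)` and `CQ transport error -6` lines that follows is that in-flight I/O failing, which is the point of the test. The control flow can be sketched with stub commands standing in for the real binaries (perf_cmd stands in for the `spdk_nvme_perf -q 128 -o 45056 ...` invocation above, kill_cmd for `killprocess $nvmfpid`; both indirections are assumptions made for testability):

```shell
#!/usr/bin/env bash
# Sketch of the shutdown-under-load flow from target/shutdown.sh, with the
# workload and the kill step abstracted behind function arguments.
run_shutdown_under_load() {
    local perf_cmd=$1 kill_cmd=$2
    "$perf_cmd" &                # workload in the background (shutdown.sh@148)
    local perfpid=$!
    sleep 1                      # let I/O get outstanding (the log sleeps 5)
    "$kill_cmd"                  # target goes away mid-I/O (shutdown.sh@155)
    if wait "$perfpid"; then
        echo "perf finished cleanly"
    else
        echo "perf failed as expected"   # the CQ transport error -6 path below
    fi
}
```

With the real binaries perf is expected to exit nonzero here; the test only cares that the target dies cleanly and the initiator reports errors instead of hanging.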
00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 994732 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 994732 ']' 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 994732 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 994732 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 994732' 00:26:12.098 killing process with pid 994732 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 994732 00:26:12.098 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 994732 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 
00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.249962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75010 is same with the state(6) to be set 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.250002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75010 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e75010 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75010 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e75010 is same with the state(6) to be set 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.250216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e754e0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e754e0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e754e0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e754e0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) 
on qpair id 2 00:26:12.098 starting I/O failed: -6 00:26:12.098 starting I/O failed: -6 00:26:12.098 [2024-11-29 13:10:14.250489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e759b0 is same with the state(6) to be set 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.250751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e74b40 is same with the state(6) to be set 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.250773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74b40 is same with the state(6) to be set 00:26:12.098 starting I/O failed: -6 00:26:12.098 [2024-11-29 13:10:14.250779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74b40 is same with the state(6) to be set 00:26:12.098 [2024-11-29 13:10:14.250785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74b40 is same with the state(6) to be set 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 [2024-11-29 13:10:14.250790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e74b40 is same with the state(6) to be set 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 
00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 [2024-11-29 13:10:14.251268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.098 Write completed with error (sct=0, sc=8) 00:26:12.098 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with 
error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 
starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 [2024-11-29 13:10:14.252176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 
00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: 
-6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O 
failed: -6 00:26:12.099 Write completed with error (sct=0, sc=8) 00:26:12.099 starting I/O failed: -6
00:26:12.099 [2024-11-29 13:10:14.253780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.099 NVMe io qpair process completion error
00:26:12.100 [2024-11-29 13:10:14.255014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.100 [2024-11-29 13:10:14.255846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.100 [2024-11-29 13:10:14.256765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.101 [2024-11-29 13:10:14.258551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.101 NVMe io qpair process completion error
00:26:12.101 [2024-11-29 13:10:14.259727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.101 [2024-11-29 13:10:14.260686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.102 [2024-11-29 13:10:14.261853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.102 [2024-11-29 13:10:14.263321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.102 NVMe io qpair process completion error
00:26:12.103 [2024-11-29 13:10:14.264443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.103 [2024-11-29 13:10:14.265255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.103 Write completed with error (sct=0, sc=8)
00:26:12.103 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries trimmed ...]
00:26:12.103 [2024-11-29 13:10:14.266203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries trimmed ...]
00:26:12.104 [2024-11-29 13:10:14.268523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.104 NVMe io qpair process completion error
[... repeated write-error entries trimmed ...]
00:26:12.104 Write completed with error (sct=0, sc=8)
00:26:12.104 Write completed with error (sct=0, sc=8)
00:26:12.104 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries trimmed ...]
00:26:12.104 [2024-11-29 13:10:14.269745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries trimmed ...]
00:26:12.104 [2024-11-29 13:10:14.270619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries trimmed ...]
00:26:12.105 Write completed with error (sct=0, sc=8)
00:26:12.105 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries trimmed ...]
00:26:12.105 [2024-11-29 13:10:14.271565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries trimmed ...]
00:26:12.105 [2024-11-29 13:10:14.273239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.105 NVMe io qpair process completion error
[... repeated write-error entries trimmed ...]
00:26:12.105 [2024-11-29 13:10:14.274547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries trimmed ...]
00:26:12.106 [2024-11-29 13:10:14.275360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries trimmed ...]
00:26:12.106
starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 [2024-11-29 13:10:14.276277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error 
(sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.106 Write completed with error (sct=0, sc=8) 00:26:12.106 starting I/O failed: -6 00:26:12.107 Write completed with 
error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 [2024-11-29 13:10:14.279059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.107 NVMe io qpair process completion error 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write 
completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 [2024-11-29 13:10:14.280066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error 
(sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 [2024-11-29 13:10:14.280882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 
00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.107 Write completed with error (sct=0, sc=8) 00:26:12.107 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with 
error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 [2024-11-29 13:10:14.281808] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed 
with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write 
completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 [2024-11-29 13:10:14.283447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair 
id 3 00:26:12.108 NVMe io qpair process completion error 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 
00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 [2024-11-29 13:10:14.284493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.108 starting I/O failed: -6 00:26:12.108 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 
00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 [2024-11-29 13:10:14.285320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with 
error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 starting I/O failed: -6 00:26:12.109 Write completed with error (sct=0, sc=8) 00:26:12.109 
00:26:12.109 Write completed with error (sct=0, sc=8)
00:26:12.109 starting I/O failed: -6
00:26:12.109 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.109 [2024-11-29 13:10:14.286273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.110 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.110 [2024-11-29 13:10:14.287755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.110 NVMe io qpair process completion error
00:26:12.110 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.110 [2024-11-29 13:10:14.289015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.110 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.110 [2024-11-29 13:10:14.289825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.110 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.111 [2024-11-29 13:10:14.291171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.111 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.111 [2024-11-29 13:10:14.293815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.111 NVMe io qpair process completion error
00:26:12.111 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.111 [2024-11-29 13:10:14.294881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:12.112 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.112 [2024-11-29 13:10:14.295705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.112 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.112 [2024-11-29 13:10:14.296667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:12.113 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:26:12.113 [2024-11-29 13:10:14.298528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:12.113 NVMe io qpair process completion error
00:26:12.113 Initializing NVMe Controllers
00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:12.113 Controller IO queue size 128, less than required.
00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:12.113 Controller IO queue size 128, less than required.
00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:12.113 Controller IO queue size 128, less than required.
00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:12.113 Controller IO queue size 128, less than required.
00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:12.113 Controller IO queue size 128, less than required.
00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:12.113 Controller IO queue size 128, less than required. 00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:12.113 Controller IO queue size 128, less than required. 00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:12.113 Controller IO queue size 128, less than required. 00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:12.113 Controller IO queue size 128, less than required. 00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:12.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.113 Controller IO queue size 128, less than required. 00:26:12.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:26:12.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.113 Initialization complete. Launching workers. 
00:26:12.113 ======================================================== 00:26:12.113 Latency(us) 00:26:12.113 Device Information : IOPS MiB/s Average min max 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1912.22 82.17 66958.56 617.49 119425.08 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1889.48 81.19 67788.47 634.40 150061.16 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1901.48 81.70 67380.12 536.27 119536.83 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1902.12 81.73 67389.21 633.13 119310.67 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1915.38 82.30 66945.93 829.50 127643.49 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1931.18 82.98 66432.28 666.64 119390.72 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1909.91 82.07 67191.64 593.59 118319.96 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1876.00 80.61 68432.42 616.68 120679.65 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1847.36 79.38 69527.97 679.64 119824.97 00:26:12.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1874.53 80.55 67838.26 650.03 119446.48 00:26:12.113 ======================================================== 00:26:12.113 Total : 18959.66 814.67 67578.51 536.27 150061.16 00:26:12.113 00:26:12.113 [2024-11-29 13:10:14.302009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2560 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2890 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ad2ef0 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad3410 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad2bc0 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad4900 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad3740 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad4ae0 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad3a70 is same with the state(6) to be set 00:26:12.113 [2024-11-29 13:10:14.302287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad4720 is same with the state(6) to be set 00:26:12.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:12.113 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:13.054 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 995113 00:26:13.054 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:26:13.054 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 995113 00:26:13.054 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:26:13.054 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 995113 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.055 rmmod nvme_tcp 00:26:13.055 rmmod nvme_fabrics 00:26:13.055 rmmod nvme_keyring 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 994732 ']' 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 994732 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 994732 ']' 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 994732 00:26:13.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (994732) - No such process 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 994732 is not found' 00:26:13.055 Process with pid 994732 is not found 00:26:13.055 
13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.055 13:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.600 00:26:15.600 real 0m10.281s 00:26:15.600 user 0m28.102s 00:26:15.600 sys 0m3.896s 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.600 13:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:15.600 ************************************ 00:26:15.600 END TEST nvmf_shutdown_tc4 00:26:15.600 ************************************ 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:15.600 00:26:15.600 real 0m43.227s 00:26:15.600 user 1m44.214s 00:26:15.600 sys 0m13.824s 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:15.600 ************************************ 00:26:15.600 END TEST nvmf_shutdown 00:26:15.600 ************************************ 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.600 ************************************ 00:26:15.600 START TEST nvmf_nsid 00:26:15.600 ************************************ 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:15.600 * Looking for test storage... 
00:26:15.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.600 
13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.600 13:10:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.600 --rc genhtml_branch_coverage=1 00:26:15.600 --rc genhtml_function_coverage=1 00:26:15.600 --rc genhtml_legend=1 00:26:15.600 --rc geninfo_all_blocks=1 00:26:15.600 --rc 
geninfo_unexecuted_blocks=1 00:26:15.600 00:26:15.600 ' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.600 --rc genhtml_branch_coverage=1 00:26:15.600 --rc genhtml_function_coverage=1 00:26:15.600 --rc genhtml_legend=1 00:26:15.600 --rc geninfo_all_blocks=1 00:26:15.600 --rc geninfo_unexecuted_blocks=1 00:26:15.600 00:26:15.600 ' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.600 --rc genhtml_branch_coverage=1 00:26:15.600 --rc genhtml_function_coverage=1 00:26:15.600 --rc genhtml_legend=1 00:26:15.600 --rc geninfo_all_blocks=1 00:26:15.600 --rc geninfo_unexecuted_blocks=1 00:26:15.600 00:26:15.600 ' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.600 --rc genhtml_branch_coverage=1 00:26:15.600 --rc genhtml_function_coverage=1 00:26:15.600 --rc genhtml_legend=1 00:26:15.600 --rc geninfo_all_blocks=1 00:26:15.600 --rc geninfo_unexecuted_blocks=1 00:26:15.600 00:26:15.600 ' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.600 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.600 13:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.601 13:10:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.738 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:23.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:23.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:23.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:23.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:23.739 13:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:23.739 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:26:23.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:26:23.739 00:26:23.739 --- 10.0.0.2 ping statistics --- 00:26:23.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.739 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:26:23.739 00:26:23.739 --- 10.0.0.1 ping statistics --- 00:26:23.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.739 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.739 13:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1000465 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1000465 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1000465 ']' 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.739 13:10:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.739 [2024-11-29 13:10:25.638054] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:26:23.739 [2024-11-29 13:10:25.638124] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.739 [2024-11-29 13:10:25.738565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.739 [2024-11-29 13:10:25.788972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.739 [2024-11-29 13:10:25.789025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.739 [2024-11-29 13:10:25.789033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.739 [2024-11-29 13:10:25.789040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.739 [2024-11-29 13:10:25.789047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:23.739 [2024-11-29 13:10:25.789794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1000738 00:26:23.999 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.000 
13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=05ba0e95-becb-4a1e-bb41-f5959a20c44c 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2abd5e41-c85f-4ce9-8f2b-6e661375b967 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5a6aabbb-a8c8-45aa-aaad-0e107d795597 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.000 null0 00:26:24.000 null1 00:26:24.000 null2 00:26:24.000 [2024-11-29 13:10:26.546623] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:26:24.000 [2024-11-29 13:10:26.546693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000738 ] 00:26:24.000 [2024-11-29 13:10:26.550113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.000 [2024-11-29 13:10:26.574409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1000738 /var/tmp/tgt2.sock 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1000738 ']' 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:24.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.000 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:24.000 [2024-11-29 13:10:26.638314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.259 [2024-11-29 13:10:26.691896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.518 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.518 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:24.518 13:10:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:24.778 [2024-11-29 13:10:27.258859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.778 [2024-11-29 13:10:27.275030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:24.778 nvme0n1 nvme0n2 00:26:24.778 nvme1n1 00:26:24.778 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:24.778 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:24.778 13:10:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:26.162 13:10:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:27.108 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.108 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 05ba0e95-becb-4a1e-bb41-f5959a20c44c 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:27.368 13:10:29 
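The `waitforblk` calls traced above poll `lsblk -l -o NAME | grep -q -w <dev>` for the newly attached namespace, retrying up to 15 times with a one-second sleep (`i=1`, `sleep 1` in the trace). A minimal sketch of that polling pattern, using a plain file instead of a block device so it runs anywhere (`wait_for_file` and the path are illustrative names, not SPDK helpers):

```shell
# Poll until a resource appears, up to a retry limit -- the same shape
# as waitforblk's "lsblk | grep -q -w" loop in autotest_common.sh.
wait_for_file() {
    local path=$1 i=0
    while ! [ -e "$path" ]; do
        [ "$i" -lt 15 ] || return 1   # give up after 15 attempts
        i=$((i + 1))
        sleep 1
    done
    return 0
}
```

In the log, the first probe misses (the namespace is not attached yet), the retry counter becomes 1, and the second probe one second later succeeds, so `waitforblk` returns 0.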
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=05ba0e95becb4a1ebb41f5959a20c44c 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 05BA0E95BECB4A1EBB41F5959A20C44C 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 05BA0E95BECB4A1EBB41F5959A20C44C == \0\5\B\A\0\E\9\5\B\E\C\B\4\A\1\E\B\B\4\1\F\5\9\5\9\A\2\0\C\4\4\C ]] 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2abd5e41-c85f-4ce9-8f2b-6e661375b967 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:27.368 
13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2abd5e41c85f4ce98f2b6e661375b967 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2ABD5E41C85F4CE98F2B6E661375B967 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2ABD5E41C85F4CE98F2B6E661375B967 == \2\A\B\D\5\E\4\1\C\8\5\F\4\C\E\9\8\F\2\B\6\E\6\6\1\3\7\5\B\9\6\7 ]] 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:27.368 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5a6aabbb-a8c8-45aa-aaad-0e107d795597 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:26:27.369 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:27.369 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5a6aabbba8c845aaaaad0e107d795597 00:26:27.369 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5A6AABBBA8C845AAAAAD0E107D795597 00:26:27.369 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5A6AABBBA8C845AAAAAD0E107D795597 == \5\A\6\A\A\B\B\B\A\8\C\8\4\5\A\A\A\A\A\D\0\E\1\0\7\D\7\9\5\5\9\7 ]] 00:26:27.369 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1000738 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1000738 ']' 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1000738 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.628 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000738 00:26:27.629 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.629 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:27.629 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000738' 00:26:27.629 killing process with pid 1000738 00:26:27.629 13:10:30 
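The three NGUID checks above compare the `nguid` field reported by `nvme id-ns ... -o json | jq -r .nguid` against the namespace UUID with its dashes stripped (`tr -d -`) and normalized to upper case. A minimal sketch of that conversion (the function name here is illustrative; the trace splits the dash-stripping and the uppercase echo across two helpers):

```shell
# Derive the expected NGUID from a namespace UUID: drop the dashes and
# upper-case the hex digits, matching the comparison done in nsid.sh.
uuid2nguid_sketch() {
    printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid_sketch 05ba0e95-becb-4a1e-bb41-f5959a20c44c
# 05BA0E95BECB4A1EBB41F5959A20C44C
```

That output matches the `echo 05BA0E95BECB4A1EBB41F5959A20C44C` line in the trace for the first namespace.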
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1000738 00:26:27.629 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1000738 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.889 rmmod nvme_tcp 00:26:27.889 rmmod nvme_fabrics 00:26:27.889 rmmod nvme_keyring 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1000465 ']' 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1000465 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1000465 ']' 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1000465 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:27.889 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.889 13:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000465 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000465' 00:26:28.149 killing process with pid 1000465 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1000465 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1000465 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.149 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.149 13:10:30 
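The `ipts`/`iptr` pair seen in this trace tags every firewall rule it inserts with `-m comment --comment 'SPDK_NVMF:...'`, so teardown can rebuild the ruleset without those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step itself is plain text processing; a sketch on a sample saved ruleset (the surrounding rules are illustrative, the tagged rule is the one from the trace):

```shell
# Drop the SPDK-tagged rule from a saved ruleset, keeping the rest --
# the same filter iptr applies between iptables-save and iptables-restore.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -j DROP'

printf '%s\n' "$saved" | grep -v SPDK_NVMF
# -A INPUT -i lo -j ACCEPT
# -A INPUT -j DROP
```

Tagging rules with a grep-able comment means cleanup never has to remember rule positions or count how many rules the test added.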
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:30.798 00:26:30.798 real 0m14.988s 00:26:30.798 user 0m11.443s 00:26:30.798 sys 0m6.901s 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:30.798 ************************************ 00:26:30.798 END TEST nvmf_nsid 00:26:30.798 ************************************ 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:30.798 00:26:30.798 real 13m3.305s 00:26:30.798 user 27m15.191s 00:26:30.798 sys 3m56.358s 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.798 13:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:30.798 ************************************ 00:26:30.798 END TEST nvmf_target_extra 00:26:30.798 ************************************ 00:26:30.798 13:10:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:30.798 13:10:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.798 13:10:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.798 13:10:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.798 ************************************ 00:26:30.798 START TEST nvmf_host 00:26:30.798 ************************************ 00:26:30.798 13:10:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:30.798 * Looking for test storage... 
00:26:30.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:30.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.798 --rc genhtml_branch_coverage=1 00:26:30.798 --rc genhtml_function_coverage=1 00:26:30.798 --rc genhtml_legend=1 00:26:30.798 --rc geninfo_all_blocks=1 00:26:30.798 --rc geninfo_unexecuted_blocks=1 00:26:30.798 00:26:30.798 ' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:30.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.798 --rc genhtml_branch_coverage=1 00:26:30.798 --rc genhtml_function_coverage=1 00:26:30.798 --rc genhtml_legend=1 00:26:30.798 --rc 
geninfo_all_blocks=1 00:26:30.798 --rc geninfo_unexecuted_blocks=1 00:26:30.798 00:26:30.798 ' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:30.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.798 --rc genhtml_branch_coverage=1 00:26:30.798 --rc genhtml_function_coverage=1 00:26:30.798 --rc genhtml_legend=1 00:26:30.798 --rc geninfo_all_blocks=1 00:26:30.798 --rc geninfo_unexecuted_blocks=1 00:26:30.798 00:26:30.798 ' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:30.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.798 --rc genhtml_branch_coverage=1 00:26:30.798 --rc genhtml_function_coverage=1 00:26:30.798 --rc genhtml_legend=1 00:26:30.798 --rc geninfo_all_blocks=1 00:26:30.798 --rc geninfo_unexecuted_blocks=1 00:26:30.798 00:26:30.798 ' 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.798 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.799 ************************************ 00:26:30.799 START TEST nvmf_multicontroller 00:26:30.799 ************************************ 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:30.799 * Looking for test storage... 
00:26:30.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.799 --rc genhtml_branch_coverage=1 00:26:30.799 --rc genhtml_function_coverage=1 
00:26:30.799 --rc genhtml_legend=1 00:26:30.799 --rc geninfo_all_blocks=1 00:26:30.799 --rc geninfo_unexecuted_blocks=1 00:26:30.799 00:26:30.799 ' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.799 --rc genhtml_branch_coverage=1 00:26:30.799 --rc genhtml_function_coverage=1 00:26:30.799 --rc genhtml_legend=1 00:26:30.799 --rc geninfo_all_blocks=1 00:26:30.799 --rc geninfo_unexecuted_blocks=1 00:26:30.799 00:26:30.799 ' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.799 --rc genhtml_branch_coverage=1 00:26:30.799 --rc genhtml_function_coverage=1 00:26:30.799 --rc genhtml_legend=1 00:26:30.799 --rc geninfo_all_blocks=1 00:26:30.799 --rc geninfo_unexecuted_blocks=1 00:26:30.799 00:26:30.799 ' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.799 --rc genhtml_branch_coverage=1 00:26:30.799 --rc genhtml_function_coverage=1 00:26:30.799 --rc genhtml_legend=1 00:26:30.799 --rc geninfo_all_blocks=1 00:26:30.799 --rc geninfo_unexecuted_blocks=1 00:26:30.799 00:26:30.799 ' 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.799 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.800 13:10:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.800 13:10:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.010 13:10:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:26:39.010 00:26:39.010 --- 10.0.0.2 ping statistics --- 00:26:39.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.010 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:26:39.010 00:26:39.010 --- 10.0.0.1 ping statistics --- 00:26:39.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.010 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1005856 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1005856 00:26:39.010 13:10:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1005856 ']' 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.010 13:10:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.010 [2024-11-29 13:10:41.001314] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:26:39.010 [2024-11-29 13:10:41.001379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.010 [2024-11-29 13:10:41.100306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.010 [2024-11-29 13:10:41.152205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.010 [2024-11-29 13:10:41.152256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:39.010 [2024-11-29 13:10:41.152264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.010 [2024-11-29 13:10:41.152271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.010 [2024-11-29 13:10:41.152277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.010 [2024-11-29 13:10:41.154130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.010 [2024-11-29 13:10:41.154209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.010 [2024-11-29 13:10:41.154246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.270 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 [2024-11-29 13:10:41.888447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 Malloc0 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.531 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.531 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 [2024-11-29 
13:10:41.960474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 [2024-11-29 13:10:41.972370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 Malloc1 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1005955 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1005955 /var/tmp/bdevperf.sock 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1005955 ']' 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:39.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.532 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.474 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.474 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:40.474 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:40.474 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.474 13:10:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.474 NVMe0n1 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.474 1 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:40.474 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:40.475 13:10:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.475 request: 00:26:40.475 { 00:26:40.475 "name": "NVMe0", 00:26:40.475 "trtype": "tcp", 00:26:40.475 "traddr": "10.0.0.2", 00:26:40.475 "adrfam": "ipv4", 00:26:40.475 "trsvcid": "4420", 00:26:40.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.475 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:40.475 "hostaddr": "10.0.0.1", 00:26:40.475 "prchk_reftag": false, 00:26:40.475 "prchk_guard": false, 00:26:40.475 "hdgst": false, 00:26:40.475 "ddgst": false, 00:26:40.475 "allow_unrecognized_csi": false, 00:26:40.475 "method": "bdev_nvme_attach_controller", 00:26:40.475 "req_id": 1 00:26:40.475 } 00:26:40.475 Got JSON-RPC error response 00:26:40.475 response: 00:26:40.475 { 00:26:40.475 "code": -114, 00:26:40.475 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:40.475 } 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:40.475 13:10:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.475 request: 00:26:40.475 { 00:26:40.475 "name": "NVMe0", 00:26:40.475 "trtype": "tcp", 00:26:40.475 "traddr": "10.0.0.2", 00:26:40.475 "adrfam": "ipv4", 00:26:40.475 "trsvcid": "4420", 00:26:40.475 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:40.475 "hostaddr": "10.0.0.1", 00:26:40.475 "prchk_reftag": false, 00:26:40.475 "prchk_guard": false, 00:26:40.475 "hdgst": false, 00:26:40.475 "ddgst": false, 00:26:40.475 "allow_unrecognized_csi": false, 00:26:40.475 "method": "bdev_nvme_attach_controller", 00:26:40.475 "req_id": 1 00:26:40.475 } 00:26:40.475 Got JSON-RPC error response 00:26:40.475 response: 00:26:40.475 { 00:26:40.475 "code": -114, 00:26:40.475 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:40.475 } 00:26:40.475 13:10:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.475 request: 00:26:40.475 { 00:26:40.475 "name": "NVMe0", 00:26:40.475 "trtype": "tcp", 00:26:40.475 "traddr": "10.0.0.2", 00:26:40.475 "adrfam": "ipv4", 00:26:40.475 "trsvcid": "4420", 00:26:40.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.475 "hostaddr": "10.0.0.1", 00:26:40.475 "prchk_reftag": false, 00:26:40.475 "prchk_guard": false, 00:26:40.475 "hdgst": false, 00:26:40.475 "ddgst": false, 00:26:40.475 "multipath": "disable", 00:26:40.475 "allow_unrecognized_csi": false, 00:26:40.475 "method": "bdev_nvme_attach_controller", 00:26:40.475 "req_id": 1 00:26:40.475 } 00:26:40.475 Got JSON-RPC error response 00:26:40.475 response: 00:26:40.475 { 00:26:40.475 "code": -114, 00:26:40.475 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:40.475 } 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:40.475 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.735 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.735 request: 00:26:40.735 { 00:26:40.735 "name": "NVMe0", 00:26:40.735 "trtype": "tcp", 00:26:40.735 "traddr": "10.0.0.2", 00:26:40.735 "adrfam": "ipv4", 00:26:40.735 "trsvcid": "4420", 00:26:40.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.735 "hostaddr": "10.0.0.1", 00:26:40.735 "prchk_reftag": false, 00:26:40.735 "prchk_guard": false, 00:26:40.735 "hdgst": false, 00:26:40.735 "ddgst": false, 00:26:40.735 "multipath": "failover", 00:26:40.735 "allow_unrecognized_csi": false, 00:26:40.735 "method": "bdev_nvme_attach_controller", 00:26:40.735 "req_id": 1 00:26:40.735 } 00:26:40.735 Got JSON-RPC error response 00:26:40.735 response: 00:26:40.735 { 00:26:40.735 "code": -114, 00:26:40.735 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:40.735 } 00:26:40.735 13:10:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:40.735 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:40.735 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:40.735 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:40.735 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.736 NVMe0n1 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.736 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:40.736 13:10:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:42.118 { 00:26:42.118 "results": [ 00:26:42.118 { 00:26:42.118 "job": "NVMe0n1", 00:26:42.118 "core_mask": "0x1", 00:26:42.118 "workload": "write", 00:26:42.118 "status": "finished", 00:26:42.118 "queue_depth": 128, 00:26:42.118 "io_size": 4096, 00:26:42.118 "runtime": 1.00818, 00:26:42.118 "iops": 19287.23045487909, 00:26:42.118 "mibps": 75.34074396437144, 00:26:42.118 "io_failed": 0, 00:26:42.118 "io_timeout": 0, 00:26:42.118 "avg_latency_us": 6610.949266135253, 00:26:42.118 "min_latency_us": 2880.8533333333335, 00:26:42.118 "max_latency_us": 9175.04 00:26:42.118 } 00:26:42.118 ], 00:26:42.118 "core_count": 1 00:26:42.118 } 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1005955 ']' 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1005955' 00:26:42.118 killing process with pid 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1005955 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:42.118 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.118 [2024-11-29 13:10:42.112528] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:26:42.118 [2024-11-29 13:10:42.112606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005955 ] 00:26:42.118 [2024-11-29 13:10:42.205491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.118 [2024-11-29 13:10:42.258449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.118 [2024-11-29 13:10:43.355869] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name eac284a7-d713-4b34-a79a-1a69c23e2bd3 already exists 00:26:42.118 [2024-11-29 13:10:43.355905] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:eac284a7-d713-4b34-a79a-1a69c23e2bd3 alias for bdev NVMe1n1 00:26:42.118 [2024-11-29 13:10:43.355915] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:42.118 Running I/O for 1 seconds... 00:26:42.118 19285.00 IOPS, 75.33 MiB/s 00:26:42.118 Latency(us) 00:26:42.118 [2024-11-29T12:10:44.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.118 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:42.118 NVMe0n1 : 1.01 19287.23 75.34 0.00 0.00 6610.95 2880.85 9175.04 00:26:42.118 [2024-11-29T12:10:44.798Z] =================================================================================================================== 00:26:42.118 [2024-11-29T12:10:44.798Z] Total : 19287.23 75.34 0.00 0.00 6610.95 2880.85 9175.04 00:26:42.118 Received shutdown signal, test time was about 1.000000 seconds 00:26:42.118 00:26:42.118 Latency(us) 00:26:42.118 [2024-11-29T12:10:44.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.118 [2024-11-29T12:10:44.798Z] =================================================================================================================== 00:26:42.118 [2024-11-29T12:10:44.798Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:26:42.118 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.118 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.118 rmmod nvme_tcp 00:26:42.118 rmmod nvme_fabrics 00:26:42.118 rmmod nvme_keyring 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1005856 ']' 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1005856 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1005856 ']' 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1005856 
00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1005856 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1005856' 00:26:42.379 killing process with pid 1005856 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1005856 00:26:42.379 13:10:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1005856 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.379 13:10:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.924 00:26:44.924 real 0m13.897s 00:26:44.924 user 0m16.688s 00:26:44.924 sys 0m6.550s 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 ************************************ 00:26:44.924 END TEST nvmf_multicontroller 00:26:44.924 ************************************ 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 ************************************ 00:26:44.924 START TEST nvmf_aer 00:26:44.924 ************************************ 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:44.924 * Looking for test storage... 
00:26:44.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.924 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.925 --rc genhtml_branch_coverage=1 00:26:44.925 --rc genhtml_function_coverage=1 00:26:44.925 --rc genhtml_legend=1 00:26:44.925 --rc geninfo_all_blocks=1 00:26:44.925 --rc geninfo_unexecuted_blocks=1 00:26:44.925 00:26:44.925 ' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.925 --rc 
genhtml_branch_coverage=1 00:26:44.925 --rc genhtml_function_coverage=1 00:26:44.925 --rc genhtml_legend=1 00:26:44.925 --rc geninfo_all_blocks=1 00:26:44.925 --rc geninfo_unexecuted_blocks=1 00:26:44.925 00:26:44.925 ' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.925 --rc genhtml_branch_coverage=1 00:26:44.925 --rc genhtml_function_coverage=1 00:26:44.925 --rc genhtml_legend=1 00:26:44.925 --rc geninfo_all_blocks=1 00:26:44.925 --rc geninfo_unexecuted_blocks=1 00:26:44.925 00:26:44.925 ' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.925 --rc genhtml_branch_coverage=1 00:26:44.925 --rc genhtml_function_coverage=1 00:26:44.925 --rc genhtml_legend=1 00:26:44.925 --rc geninfo_all_blocks=1 00:26:44.925 --rc geninfo_unexecuted_blocks=1 00:26:44.925 00:26:44.925 ' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.925 13:10:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.925 13:10:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.066 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.067 13:10:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.067 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.067 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:53.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:26:53.067 00:26:53.067 --- 10.0.0.2 ping statistics --- 00:26:53.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.067 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:26:53.067 00:26:53.067 --- 10.0.0.1 ping statistics --- 00:26:53.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.067 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.067 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1010717 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1010717 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1010717 ']' 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.068 13:10:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.068 [2024-11-29 13:10:55.013654] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:26:53.068 [2024-11-29 13:10:55.013724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.068 [2024-11-29 13:10:55.114919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.068 [2024-11-29 13:10:55.168946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:53.068 [2024-11-29 13:10:55.169002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.068 [2024-11-29 13:10:55.169011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.068 [2024-11-29 13:10:55.169018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.068 [2024-11-29 13:10:55.169024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.068 [2024-11-29 13:10:55.171478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.068 [2024-11-29 13:10:55.171640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.068 [2024-11-29 13:10:55.171802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.068 [2024-11-29 13:10:55.171802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 [2024-11-29 13:10:55.893444] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 Malloc0 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 [2024-11-29 13:10:55.970028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.330 [ 00:26:53.330 { 00:26:53.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.330 "subtype": "Discovery", 00:26:53.330 "listen_addresses": [], 00:26:53.330 "allow_any_host": true, 00:26:53.330 "hosts": [] 00:26:53.330 }, 00:26:53.330 { 00:26:53.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.330 "subtype": "NVMe", 00:26:53.330 "listen_addresses": [ 00:26:53.330 { 00:26:53.330 "trtype": "TCP", 00:26:53.330 "adrfam": "IPv4", 00:26:53.330 "traddr": "10.0.0.2", 00:26:53.330 "trsvcid": "4420" 00:26:53.330 } 00:26:53.330 ], 00:26:53.330 "allow_any_host": true, 00:26:53.330 "hosts": [], 00:26:53.330 "serial_number": "SPDK00000000000001", 00:26:53.330 "model_number": "SPDK bdev Controller", 00:26:53.330 "max_namespaces": 2, 00:26:53.330 "min_cntlid": 1, 00:26:53.330 "max_cntlid": 65519, 00:26:53.330 "namespaces": [ 00:26:53.330 { 00:26:53.330 "nsid": 1, 00:26:53.330 "bdev_name": "Malloc0", 00:26:53.330 "name": "Malloc0", 00:26:53.330 "nguid": "F2B5909AE7474292B77D4BF6ADF2416F", 00:26:53.330 "uuid": "f2b5909a-e747-4292-b77d-4bf6adf2416f" 00:26:53.330 } 00:26:53.330 ] 00:26:53.330 } 00:26:53.330 ] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1011000 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.330 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:53.331 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:53.331 13:10:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:53.591 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 Malloc1 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 Asynchronous Event Request test 00:26:53.853 Attaching to 10.0.0.2 00:26:53.853 Attached to 10.0.0.2 00:26:53.853 Registering asynchronous event callbacks... 00:26:53.853 Starting namespace attribute notice tests for all controllers... 00:26:53.853 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:53.853 aer_cb - Changed Namespace 00:26:53.853 Cleaning up... 
00:26:53.853 [ 00:26:53.853 { 00:26:53.853 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.853 "subtype": "Discovery", 00:26:53.853 "listen_addresses": [], 00:26:53.853 "allow_any_host": true, 00:26:53.853 "hosts": [] 00:26:53.853 }, 00:26:53.853 { 00:26:53.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.853 "subtype": "NVMe", 00:26:53.853 "listen_addresses": [ 00:26:53.853 { 00:26:53.853 "trtype": "TCP", 00:26:53.853 "adrfam": "IPv4", 00:26:53.853 "traddr": "10.0.0.2", 00:26:53.853 "trsvcid": "4420" 00:26:53.853 } 00:26:53.853 ], 00:26:53.853 "allow_any_host": true, 00:26:53.853 "hosts": [], 00:26:53.853 "serial_number": "SPDK00000000000001", 00:26:53.853 "model_number": "SPDK bdev Controller", 00:26:53.853 "max_namespaces": 2, 00:26:53.853 "min_cntlid": 1, 00:26:53.853 "max_cntlid": 65519, 00:26:53.853 "namespaces": [ 00:26:53.853 { 00:26:53.853 "nsid": 1, 00:26:53.853 "bdev_name": "Malloc0", 00:26:53.853 "name": "Malloc0", 00:26:53.853 "nguid": "F2B5909AE7474292B77D4BF6ADF2416F", 00:26:53.853 "uuid": "f2b5909a-e747-4292-b77d-4bf6adf2416f" 00:26:53.853 }, 00:26:53.853 { 00:26:53.853 "nsid": 2, 00:26:53.853 "bdev_name": "Malloc1", 00:26:53.853 "name": "Malloc1", 00:26:53.853 "nguid": "5160FA986E7C499C8A7EC2DA7C112AA3", 00:26:53.853 "uuid": "5160fa98-6e7c-499c-8a7e-c2da7c112aa3" 00:26:53.853 } 00:26:53.853 ] 00:26:53.853 } 00:26:53.853 ] 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1011000 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.853 13:10:56 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.853 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.854 rmmod nvme_tcp 00:26:53.854 rmmod nvme_fabrics 00:26:53.854 rmmod nvme_keyring 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1010717 ']' 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1010717 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1010717 ']' 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1010717 00:26:53.854 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010717 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010717' 00:26:54.115 killing process with pid 1010717 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1010717 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1010717 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.115 13:10:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.659 00:26:56.659 real 0m11.675s 00:26:56.659 user 0m8.599s 00:26:56.659 sys 0m6.209s 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:56.659 ************************************ 00:26:56.659 END TEST nvmf_aer 00:26:56.659 ************************************ 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.659 ************************************ 00:26:56.659 START TEST nvmf_async_init 00:26:56.659 ************************************ 00:26:56.659 13:10:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:56.659 * Looking for test storage... 
00:26:56.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.659 13:10:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.659 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.660 --rc genhtml_branch_coverage=1 00:26:56.660 --rc genhtml_function_coverage=1 00:26:56.660 --rc genhtml_legend=1 00:26:56.660 --rc geninfo_all_blocks=1 00:26:56.660 --rc geninfo_unexecuted_blocks=1 00:26:56.660 
00:26:56.660 ' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.660 --rc genhtml_branch_coverage=1 00:26:56.660 --rc genhtml_function_coverage=1 00:26:56.660 --rc genhtml_legend=1 00:26:56.660 --rc geninfo_all_blocks=1 00:26:56.660 --rc geninfo_unexecuted_blocks=1 00:26:56.660 00:26:56.660 ' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.660 --rc genhtml_branch_coverage=1 00:26:56.660 --rc genhtml_function_coverage=1 00:26:56.660 --rc genhtml_legend=1 00:26:56.660 --rc geninfo_all_blocks=1 00:26:56.660 --rc geninfo_unexecuted_blocks=1 00:26:56.660 00:26:56.660 ' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.660 --rc genhtml_branch_coverage=1 00:26:56.660 --rc genhtml_function_coverage=1 00:26:56.660 --rc genhtml_legend=1 00:26:56.660 --rc geninfo_all_blocks=1 00:26:56.660 --rc geninfo_unexecuted_blocks=1 00:26:56.660 00:26:56.660 ' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=874f848b592d43629e798a135efc39c7 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.660 13:10:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.800 13:11:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:04.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:04.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.800 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:04.801 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:04.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:04.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:27:04.801 00:27:04.801 --- 10.0.0.2 ping statistics --- 00:27:04.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.801 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:27:04.801 00:27:04.801 --- 10.0.0.1 ping statistics --- 00:27:04.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.801 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1015329 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1015329 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1015329 ']' 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.801 13:11:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:04.801 [2024-11-29 13:11:06.828441] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:27:04.801 [2024-11-29 13:11:06.828511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.801 [2024-11-29 13:11:06.929852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.801 [2024-11-29 13:11:06.981598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.801 [2024-11-29 13:11:06.981648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.801 [2024-11-29 13:11:06.981657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.801 [2024-11-29 13:11:06.981664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.801 [2024-11-29 13:11:06.981670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:04.801 [2024-11-29 13:11:06.982430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.062 [2024-11-29 13:11:07.685921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.062 null0 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 874f848b592d43629e798a135efc39c7 00:27:05.062 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.063 [2024-11-29 13:11:07.726228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.063 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.323 nvme0n1 00:27:05.323 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.323 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:05.323 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.323 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.323 [ 00:27:05.323 { 00:27:05.323 "name": "nvme0n1", 00:27:05.323 "aliases": [ 00:27:05.323 "874f848b-592d-4362-9e79-8a135efc39c7" 00:27:05.323 ], 00:27:05.323 "product_name": "NVMe disk", 00:27:05.323 "block_size": 512, 00:27:05.323 "num_blocks": 2097152, 00:27:05.323 "uuid": "874f848b-592d-4362-9e79-8a135efc39c7", 00:27:05.323 "numa_id": 0, 00:27:05.323 "assigned_rate_limits": { 00:27:05.323 "rw_ios_per_sec": 0, 00:27:05.323 "rw_mbytes_per_sec": 0, 00:27:05.323 "r_mbytes_per_sec": 0, 00:27:05.323 "w_mbytes_per_sec": 0 00:27:05.323 }, 00:27:05.323 "claimed": false, 00:27:05.323 "zoned": false, 00:27:05.323 "supported_io_types": { 00:27:05.323 "read": true, 00:27:05.323 "write": true, 00:27:05.323 "unmap": false, 00:27:05.323 "flush": true, 00:27:05.323 "reset": true, 00:27:05.323 "nvme_admin": true, 00:27:05.323 "nvme_io": true, 00:27:05.323 "nvme_io_md": false, 00:27:05.323 "write_zeroes": true, 00:27:05.323 "zcopy": false, 00:27:05.323 "get_zone_info": false, 00:27:05.323 "zone_management": false, 00:27:05.323 "zone_append": false, 00:27:05.323 "compare": true, 00:27:05.323 "compare_and_write": true, 00:27:05.323 "abort": true, 00:27:05.323 "seek_hole": false, 00:27:05.323 "seek_data": false, 00:27:05.323 "copy": true, 00:27:05.323 
"nvme_iov_md": false 00:27:05.323 }, 00:27:05.323 "memory_domains": [ 00:27:05.323 { 00:27:05.323 "dma_device_id": "system", 00:27:05.323 "dma_device_type": 1 00:27:05.323 } 00:27:05.323 ], 00:27:05.323 "driver_specific": { 00:27:05.323 "nvme": [ 00:27:05.323 { 00:27:05.323 "trid": { 00:27:05.323 "trtype": "TCP", 00:27:05.323 "adrfam": "IPv4", 00:27:05.323 "traddr": "10.0.0.2", 00:27:05.323 "trsvcid": "4420", 00:27:05.323 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:05.323 }, 00:27:05.323 "ctrlr_data": { 00:27:05.323 "cntlid": 1, 00:27:05.323 "vendor_id": "0x8086", 00:27:05.323 "model_number": "SPDK bdev Controller", 00:27:05.323 "serial_number": "00000000000000000000", 00:27:05.323 "firmware_revision": "25.01", 00:27:05.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.324 "oacs": { 00:27:05.324 "security": 0, 00:27:05.324 "format": 0, 00:27:05.324 "firmware": 0, 00:27:05.324 "ns_manage": 0 00:27:05.324 }, 00:27:05.324 "multi_ctrlr": true, 00:27:05.324 "ana_reporting": false 00:27:05.324 }, 00:27:05.324 "vs": { 00:27:05.324 "nvme_version": "1.3" 00:27:05.324 }, 00:27:05.324 "ns_data": { 00:27:05.324 "id": 1, 00:27:05.324 "can_share": true 00:27:05.324 } 00:27:05.324 } 00:27:05.324 ], 00:27:05.324 "mp_policy": "active_passive" 00:27:05.324 } 00:27:05.324 } 00:27:05.324 ] 00:27:05.324 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.324 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:05.324 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.324 13:11:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.324 [2024-11-29 13:11:07.979820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:05.324 [2024-11-29 13:11:07.979915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1aaace0 (9): Bad file descriptor 00:27:05.585 [2024-11-29 13:11:08.112272] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [ 00:27:05.585 { 00:27:05.585 "name": "nvme0n1", 00:27:05.585 "aliases": [ 00:27:05.585 "874f848b-592d-4362-9e79-8a135efc39c7" 00:27:05.585 ], 00:27:05.585 "product_name": "NVMe disk", 00:27:05.585 "block_size": 512, 00:27:05.585 "num_blocks": 2097152, 00:27:05.585 "uuid": "874f848b-592d-4362-9e79-8a135efc39c7", 00:27:05.585 "numa_id": 0, 00:27:05.585 "assigned_rate_limits": { 00:27:05.585 "rw_ios_per_sec": 0, 00:27:05.585 "rw_mbytes_per_sec": 0, 00:27:05.585 "r_mbytes_per_sec": 0, 00:27:05.585 "w_mbytes_per_sec": 0 00:27:05.585 }, 00:27:05.585 "claimed": false, 00:27:05.585 "zoned": false, 00:27:05.585 "supported_io_types": { 00:27:05.585 "read": true, 00:27:05.585 "write": true, 00:27:05.585 "unmap": false, 00:27:05.585 "flush": true, 00:27:05.585 "reset": true, 00:27:05.585 "nvme_admin": true, 00:27:05.585 "nvme_io": true, 00:27:05.585 "nvme_io_md": false, 00:27:05.585 "write_zeroes": true, 00:27:05.585 "zcopy": false, 00:27:05.585 "get_zone_info": false, 00:27:05.585 "zone_management": false, 00:27:05.585 "zone_append": false, 00:27:05.585 "compare": true, 00:27:05.585 "compare_and_write": true, 00:27:05.585 "abort": true, 00:27:05.585 "seek_hole": false, 00:27:05.585 "seek_data": false, 00:27:05.585 "copy": true, 00:27:05.585 "nvme_iov_md": false 00:27:05.585 }, 00:27:05.585 "memory_domains": [ 
00:27:05.585 { 00:27:05.585 "dma_device_id": "system", 00:27:05.585 "dma_device_type": 1 00:27:05.585 } 00:27:05.585 ], 00:27:05.585 "driver_specific": { 00:27:05.585 "nvme": [ 00:27:05.585 { 00:27:05.585 "trid": { 00:27:05.585 "trtype": "TCP", 00:27:05.585 "adrfam": "IPv4", 00:27:05.585 "traddr": "10.0.0.2", 00:27:05.585 "trsvcid": "4420", 00:27:05.585 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:05.585 }, 00:27:05.585 "ctrlr_data": { 00:27:05.585 "cntlid": 2, 00:27:05.585 "vendor_id": "0x8086", 00:27:05.585 "model_number": "SPDK bdev Controller", 00:27:05.585 "serial_number": "00000000000000000000", 00:27:05.585 "firmware_revision": "25.01", 00:27:05.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.585 "oacs": { 00:27:05.585 "security": 0, 00:27:05.585 "format": 0, 00:27:05.585 "firmware": 0, 00:27:05.585 "ns_manage": 0 00:27:05.585 }, 00:27:05.585 "multi_ctrlr": true, 00:27:05.585 "ana_reporting": false 00:27:05.585 }, 00:27:05.585 "vs": { 00:27:05.585 "nvme_version": "1.3" 00:27:05.585 }, 00:27:05.585 "ns_data": { 00:27:05.585 "id": 1, 00:27:05.585 "can_share": true 00:27:05.585 } 00:27:05.585 } 00:27:05.585 ], 00:27:05.585 "mp_policy": "active_passive" 00:27:05.585 } 00:27:05.585 } 00:27:05.585 ] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bAHKEYlzlS 
00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bAHKEYlzlS 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bAHKEYlzlS 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [2024-11-29 13:11:08.184479] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:05.585 [2024-11-29 13:11:08.184644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.585 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.585 [2024-11-29 13:11:08.200538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:05.846 nvme0n1 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.846 [ 00:27:05.846 { 00:27:05.846 "name": "nvme0n1", 00:27:05.846 "aliases": [ 00:27:05.846 "874f848b-592d-4362-9e79-8a135efc39c7" 00:27:05.846 ], 00:27:05.846 "product_name": "NVMe disk", 00:27:05.846 "block_size": 512, 00:27:05.846 "num_blocks": 2097152, 00:27:05.846 "uuid": "874f848b-592d-4362-9e79-8a135efc39c7", 00:27:05.846 "numa_id": 0, 00:27:05.846 "assigned_rate_limits": { 00:27:05.846 "rw_ios_per_sec": 0, 00:27:05.846 
"rw_mbytes_per_sec": 0, 00:27:05.846 "r_mbytes_per_sec": 0, 00:27:05.846 "w_mbytes_per_sec": 0 00:27:05.846 }, 00:27:05.846 "claimed": false, 00:27:05.846 "zoned": false, 00:27:05.846 "supported_io_types": { 00:27:05.846 "read": true, 00:27:05.846 "write": true, 00:27:05.846 "unmap": false, 00:27:05.846 "flush": true, 00:27:05.846 "reset": true, 00:27:05.846 "nvme_admin": true, 00:27:05.846 "nvme_io": true, 00:27:05.846 "nvme_io_md": false, 00:27:05.846 "write_zeroes": true, 00:27:05.846 "zcopy": false, 00:27:05.846 "get_zone_info": false, 00:27:05.846 "zone_management": false, 00:27:05.846 "zone_append": false, 00:27:05.846 "compare": true, 00:27:05.846 "compare_and_write": true, 00:27:05.846 "abort": true, 00:27:05.846 "seek_hole": false, 00:27:05.846 "seek_data": false, 00:27:05.846 "copy": true, 00:27:05.846 "nvme_iov_md": false 00:27:05.846 }, 00:27:05.846 "memory_domains": [ 00:27:05.846 { 00:27:05.846 "dma_device_id": "system", 00:27:05.846 "dma_device_type": 1 00:27:05.846 } 00:27:05.846 ], 00:27:05.846 "driver_specific": { 00:27:05.846 "nvme": [ 00:27:05.846 { 00:27:05.846 "trid": { 00:27:05.846 "trtype": "TCP", 00:27:05.846 "adrfam": "IPv4", 00:27:05.846 "traddr": "10.0.0.2", 00:27:05.846 "trsvcid": "4421", 00:27:05.846 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:05.846 }, 00:27:05.846 "ctrlr_data": { 00:27:05.846 "cntlid": 3, 00:27:05.846 "vendor_id": "0x8086", 00:27:05.846 "model_number": "SPDK bdev Controller", 00:27:05.846 "serial_number": "00000000000000000000", 00:27:05.846 "firmware_revision": "25.01", 00:27:05.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.846 "oacs": { 00:27:05.846 "security": 0, 00:27:05.846 "format": 0, 00:27:05.846 "firmware": 0, 00:27:05.846 "ns_manage": 0 00:27:05.846 }, 00:27:05.846 "multi_ctrlr": true, 00:27:05.846 "ana_reporting": false 00:27:05.846 }, 00:27:05.846 "vs": { 00:27:05.846 "nvme_version": "1.3" 00:27:05.846 }, 00:27:05.846 "ns_data": { 00:27:05.846 "id": 1, 00:27:05.846 "can_share": true 00:27:05.846 } 
00:27:05.846 } 00:27:05.846 ], 00:27:05.846 "mp_policy": "active_passive" 00:27:05.846 } 00:27:05.846 } 00:27:05.846 ] 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bAHKEYlzlS 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.846 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.846 rmmod nvme_tcp 00:27:05.846 rmmod nvme_fabrics 00:27:05.846 rmmod nvme_keyring 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:05.847 13:11:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1015329 ']' 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1015329 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1015329 ']' 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1015329 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1015329 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1015329' 00:27:05.847 killing process with pid 1015329 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1015329 00:27:05.847 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1015329 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:06.108 
13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.108 13:11:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.020 13:11:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.020 00:27:08.020 real 0m11.758s 00:27:08.020 user 0m4.044s 00:27:08.020 sys 0m6.253s 00:27:08.020 13:11:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.020 13:11:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:08.020 ************************************ 00:27:08.020 END TEST nvmf_async_init 00:27:08.020 ************************************ 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.281 ************************************ 00:27:08.281 START TEST dma 00:27:08.281 ************************************ 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:27:08.281 * Looking for test storage... 00:27:08.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.281 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.542 --rc genhtml_branch_coverage=1 00:27:08.542 --rc genhtml_function_coverage=1 00:27:08.542 --rc genhtml_legend=1 00:27:08.542 --rc geninfo_all_blocks=1 00:27:08.542 --rc geninfo_unexecuted_blocks=1 00:27:08.542 00:27:08.542 ' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.542 --rc genhtml_branch_coverage=1 00:27:08.542 --rc genhtml_function_coverage=1 
00:27:08.542 --rc genhtml_legend=1 00:27:08.542 --rc geninfo_all_blocks=1 00:27:08.542 --rc geninfo_unexecuted_blocks=1 00:27:08.542 00:27:08.542 ' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.542 --rc genhtml_branch_coverage=1 00:27:08.542 --rc genhtml_function_coverage=1 00:27:08.542 --rc genhtml_legend=1 00:27:08.542 --rc geninfo_all_blocks=1 00:27:08.542 --rc geninfo_unexecuted_blocks=1 00:27:08.542 00:27:08.542 ' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.542 --rc genhtml_branch_coverage=1 00:27:08.542 --rc genhtml_function_coverage=1 00:27:08.542 --rc genhtml_legend=1 00:27:08.542 --rc geninfo_all_blocks=1 00:27:08.542 --rc geninfo_unexecuted_blocks=1 00:27:08.542 00:27:08.542 ' 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.542 13:11:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.542 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:08.543 
13:11:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:08.543 00:27:08.543 real 0m0.242s 00:27:08.543 user 0m0.144s 00:27:08.543 sys 0m0.113s 00:27:08.543 13:11:11 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:08.543 ************************************ 00:27:08.543 END TEST dma 00:27:08.543 ************************************ 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.543 ************************************ 00:27:08.543 START TEST nvmf_identify 00:27:08.543 ************************************ 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:08.543 * Looking for test storage... 
00:27:08.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.543 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.804 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.805 --rc genhtml_branch_coverage=1 00:27:08.805 --rc genhtml_function_coverage=1 00:27:08.805 --rc genhtml_legend=1 00:27:08.805 --rc geninfo_all_blocks=1 00:27:08.805 --rc geninfo_unexecuted_blocks=1 00:27:08.805 00:27:08.805 ' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:27:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.805 --rc genhtml_branch_coverage=1 00:27:08.805 --rc genhtml_function_coverage=1 00:27:08.805 --rc genhtml_legend=1 00:27:08.805 --rc geninfo_all_blocks=1 00:27:08.805 --rc geninfo_unexecuted_blocks=1 00:27:08.805 00:27:08.805 ' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.805 --rc genhtml_branch_coverage=1 00:27:08.805 --rc genhtml_function_coverage=1 00:27:08.805 --rc genhtml_legend=1 00:27:08.805 --rc geninfo_all_blocks=1 00:27:08.805 --rc geninfo_unexecuted_blocks=1 00:27:08.805 00:27:08.805 ' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.805 --rc genhtml_branch_coverage=1 00:27:08.805 --rc genhtml_function_coverage=1 00:27:08.805 --rc genhtml_legend=1 00:27:08.805 --rc geninfo_all_blocks=1 00:27:08.805 --rc geninfo_unexecuted_blocks=1 00:27:08.805 00:27:08.805 ' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.805 13:11:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.951 13:11:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:16.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.951 
13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:16.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.951 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:16.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:16.952 13:11:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:16.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:27:16.952 00:27:16.952 --- 10.0.0.2 ping statistics --- 00:27:16.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.952 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:27:16.952 00:27:16.952 --- 10.0.0.1 ping statistics --- 00:27:16.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.952 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1019997 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1019997 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1019997 ']' 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.952 13:11:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:16.952 [2024-11-29 13:11:18.972499] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:27:16.952 [2024-11-29 13:11:18.972566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.952 [2024-11-29 13:11:19.073783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.952 [2024-11-29 13:11:19.128032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.953 [2024-11-29 13:11:19.128092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.953 [2024-11-29 13:11:19.128101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.953 [2024-11-29 13:11:19.128108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.953 [2024-11-29 13:11:19.128114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.953 [2024-11-29 13:11:19.130537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.953 [2024-11-29 13:11:19.130696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.953 [2024-11-29 13:11:19.130856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.953 [2024-11-29 13:11:19.130857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.213 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.213 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:17.213 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.213 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.213 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.214 [2024-11-29 13:11:19.808000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.214 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 Malloc0 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 [2024-11-29 13:11:19.926960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 13:11:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:17.477 [ 00:27:17.477 { 00:27:17.477 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.477 "subtype": "Discovery", 00:27:17.477 "listen_addresses": [ 00:27:17.477 { 00:27:17.477 "trtype": "TCP", 00:27:17.477 "adrfam": "IPv4", 00:27:17.477 "traddr": "10.0.0.2", 00:27:17.477 "trsvcid": "4420" 00:27:17.477 } 00:27:17.477 ], 00:27:17.477 "allow_any_host": true, 00:27:17.477 "hosts": [] 00:27:17.477 }, 00:27:17.477 { 00:27:17.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.477 "subtype": "NVMe", 00:27:17.477 "listen_addresses": [ 00:27:17.477 { 00:27:17.477 "trtype": "TCP", 00:27:17.477 "adrfam": "IPv4", 00:27:17.477 "traddr": "10.0.0.2", 00:27:17.477 "trsvcid": "4420" 00:27:17.477 } 00:27:17.477 ], 00:27:17.477 "allow_any_host": true, 00:27:17.477 "hosts": [], 00:27:17.477 "serial_number": "SPDK00000000000001", 00:27:17.477 "model_number": "SPDK bdev Controller", 00:27:17.477 "max_namespaces": 32, 00:27:17.477 "min_cntlid": 1, 00:27:17.477 "max_cntlid": 65519, 00:27:17.477 "namespaces": [ 00:27:17.477 { 00:27:17.477 "nsid": 1, 00:27:17.477 "bdev_name": "Malloc0", 00:27:17.477 "name": "Malloc0", 00:27:17.477 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:17.477 "eui64": "ABCDEF0123456789", 00:27:17.477 "uuid": "17a02483-d03c-4fee-94e5-8dfe5e750bc6" 00:27:17.477 } 00:27:17.477 ] 00:27:17.477 } 00:27:17.477 ] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.477 13:11:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:17.477 [2024-11-29 13:11:19.991639] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:27:17.477 [2024-11-29 13:11:19.991688] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020101 ] 00:27:17.477 [2024-11-29 13:11:20.047921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:17.477 [2024-11-29 13:11:20.047989] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:17.477 [2024-11-29 13:11:20.047996] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:17.477 [2024-11-29 13:11:20.048018] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:17.477 [2024-11-29 13:11:20.048029] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:17.477 [2024-11-29 13:11:20.051647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:17.477 [2024-11-29 13:11:20.051703] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1411690 0 00:27:17.477 [2024-11-29 13:11:20.059179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:17.477 [2024-11-29 13:11:20.059197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:17.477 [2024-11-29 13:11:20.059202] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:17.477 [2024-11-29 13:11:20.059206] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:17.477 [2024-11-29 13:11:20.059256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.477 [2024-11-29 13:11:20.059263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.477 [2024-11-29 13:11:20.059268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.477 [2024-11-29 13:11:20.059286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:17.477 [2024-11-29 13:11:20.059309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.477 [2024-11-29 13:11:20.064171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.477 [2024-11-29 13:11:20.064181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.477 [2024-11-29 13:11:20.064186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.064212] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:17.478 [2024-11-29 13:11:20.064221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:17.478 [2024-11-29 13:11:20.064227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:17.478 [2024-11-29 13:11:20.064246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 
00:27:17.478 [2024-11-29 13:11:20.064263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.064280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.064398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.064405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.478 [2024-11-29 13:11:20.064411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.064425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:17.478 [2024-11-29 13:11:20.064433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:17.478 [2024-11-29 13:11:20.064441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.064456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.064467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.064580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.064586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:17.478 [2024-11-29 13:11:20.064590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.064599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:17.478 [2024-11-29 13:11:20.064609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:17.478 [2024-11-29 13:11:20.064615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.064631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.064643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.064752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.064759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.478 [2024-11-29 13:11:20.064762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.064779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:17.478 [2024-11-29 13:11:20.064789] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.064804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.064815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.064886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.064892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.478 [2024-11-29 13:11:20.064896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.064900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.064905] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:17.478 [2024-11-29 13:11:20.064910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:17.478 [2024-11-29 13:11:20.064919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:17.478 [2024-11-29 13:11:20.065029] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:17.478 [2024-11-29 13:11:20.065034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:27:17.478 [2024-11-29 13:11:20.065044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.065059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.065070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.065179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.065187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.478 [2024-11-29 13:11:20.065192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.065201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:17.478 [2024-11-29 13:11:20.065211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.065225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.065236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 
13:11:20.065345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.478 [2024-11-29 13:11:20.065352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.478 [2024-11-29 13:11:20.065358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.478 [2024-11-29 13:11:20.065367] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:17.478 [2024-11-29 13:11:20.065372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:17.478 [2024-11-29 13:11:20.065380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:17.478 [2024-11-29 13:11:20.065394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:17.478 [2024-11-29 13:11:20.065404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.478 [2024-11-29 13:11:20.065408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.478 [2024-11-29 13:11:20.065415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.478 [2024-11-29 13:11:20.065427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.478 [2024-11-29 13:11:20.065547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.479 [2024-11-29 13:11:20.065554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:27:17.479 [2024-11-29 13:11:20.065558] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.065562] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1411690): datao=0, datal=4096, cccid=0 00:27:17.479 [2024-11-29 13:11:20.065567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1473100) on tqpair(0x1411690): expected_datao=0, payload_size=4096 00:27:17.479 [2024-11-29 13:11:20.065572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.065589] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.065595] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.479 [2024-11-29 13:11:20.111188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.479 [2024-11-29 13:11:20.111192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.479 [2024-11-29 13:11:20.111207] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:17.479 [2024-11-29 13:11:20.111213] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:17.479 [2024-11-29 13:11:20.111218] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:17.479 [2024-11-29 13:11:20.111223] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:17.479 [2024-11-29 13:11:20.111228] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:17.479 [2024-11-29 13:11:20.111233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:17.479 [2024-11-29 13:11:20.111243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:17.479 [2024-11-29 13:11:20.111251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:17.479 [2024-11-29 13:11:20.111287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.479 [2024-11-29 13:11:20.111551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.479 [2024-11-29 13:11:20.111558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.479 [2024-11-29 13:11:20.111561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.479 [2024-11-29 13:11:20.111573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111589] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.479 [2024-11-29 13:11:20.111595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.479 [2024-11-29 13:11:20.111615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.479 [2024-11-29 13:11:20.111635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.479 [2024-11-29 13:11:20.111654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:17.479 [2024-11-29 13:11:20.111668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:17.479 [2024-11-29 13:11:20.111675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.479 [2024-11-29 13:11:20.111699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473100, cid 0, qid 0 00:27:17.479 [2024-11-29 13:11:20.111705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473280, cid 1, qid 0 00:27:17.479 [2024-11-29 13:11:20.111710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473400, cid 2, qid 0 00:27:17.479 [2024-11-29 13:11:20.111714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.479 [2024-11-29 13:11:20.111719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473700, cid 4, qid 0 00:27:17.479 [2024-11-29 13:11:20.111842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.479 [2024-11-29 13:11:20.111851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.479 [2024-11-29 13:11:20.111854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473700) on tqpair=0x1411690 00:27:17.479 [2024-11-29 13:11:20.111864] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:17.479 [2024-11-29 13:11:20.111871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:27:17.479 [2024-11-29 13:11:20.111882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.111886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1411690) 00:27:17.479 [2024-11-29 13:11:20.111893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.479 [2024-11-29 13:11:20.111904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473700, cid 4, qid 0 00:27:17.479 [2024-11-29 13:11:20.112024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.479 [2024-11-29 13:11:20.112031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.479 [2024-11-29 13:11:20.112035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112039] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1411690): datao=0, datal=4096, cccid=4 00:27:17.479 [2024-11-29 13:11:20.112043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1473700) on tqpair(0x1411690): expected_datao=0, payload_size=4096 00:27:17.479 [2024-11-29 13:11:20.112048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112060] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.479 [2024-11-29 13:11:20.112134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.479 [2024-11-29 13:11:20.112138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1473700) on tqpair=0x1411690 00:27:17.479 [2024-11-29 13:11:20.112156] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:17.479 [2024-11-29 13:11:20.112195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.479 [2024-11-29 13:11:20.112200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1411690) 00:27:17.480 [2024-11-29 13:11:20.112207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.480 [2024-11-29 13:11:20.112215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1411690) 00:27:17.480 [2024-11-29 13:11:20.112229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.480 [2024-11-29 13:11:20.112244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473700, cid 4, qid 0 00:27:17.480 [2024-11-29 13:11:20.112250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473880, cid 5, qid 0 00:27:17.480 [2024-11-29 13:11:20.112408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.480 [2024-11-29 13:11:20.112414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.480 [2024-11-29 13:11:20.112419] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112426] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1411690): datao=0, datal=1024, cccid=4 00:27:17.480 [2024-11-29 13:11:20.112430] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1473700) on tqpair(0x1411690): expected_datao=0, payload_size=1024 00:27:17.480 [2024-11-29 13:11:20.112435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112442] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.480 [2024-11-29 13:11:20.112457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.480 [2024-11-29 13:11:20.112460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.112464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473880) on tqpair=0x1411690 00:27:17.480 [2024-11-29 13:11:20.153356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.480 [2024-11-29 13:11:20.153370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.480 [2024-11-29 13:11:20.153374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.153378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473700) on tqpair=0x1411690 00:27:17.480 [2024-11-29 13:11:20.153391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.153395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1411690) 00:27:17.480 [2024-11-29 13:11:20.153403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.480 [2024-11-29 13:11:20.153421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473700, cid 4, qid 0 00:27:17.480 [2024-11-29 13:11:20.153673] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.480 [2024-11-29 13:11:20.153681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.480 [2024-11-29 13:11:20.153684] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.153688] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1411690): datao=0, datal=3072, cccid=4 00:27:17.480 [2024-11-29 13:11:20.153692] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1473700) on tqpair(0x1411690): expected_datao=0, payload_size=3072 00:27:17.480 [2024-11-29 13:11:20.153697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.153718] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.480 [2024-11-29 13:11:20.153722] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.742 [2024-11-29 13:11:20.198188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.742 [2024-11-29 13:11:20.198192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473700) on tqpair=0x1411690 00:27:17.742 [2024-11-29 13:11:20.198207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1411690) 00:27:17.742 [2024-11-29 13:11:20.198219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.742 [2024-11-29 13:11:20.198236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473700, cid 4, qid 0 00:27:17.742 [2024-11-29 
13:11:20.198414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.742 [2024-11-29 13:11:20.198421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.742 [2024-11-29 13:11:20.198425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198428] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1411690): datao=0, datal=8, cccid=4 00:27:17.742 [2024-11-29 13:11:20.198438] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1473700) on tqpair(0x1411690): expected_datao=0, payload_size=8 00:27:17.742 [2024-11-29 13:11:20.198442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198449] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.198452] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.240316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.742 [2024-11-29 13:11:20.240328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.742 [2024-11-29 13:11:20.240332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.742 [2024-11-29 13:11:20.240336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473700) on tqpair=0x1411690 00:27:17.742 ===================================================== 00:27:17.742 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:17.742 ===================================================== 00:27:17.742 Controller Capabilities/Features 00:27:17.742 ================================ 00:27:17.742 Vendor ID: 0000 00:27:17.742 Subsystem Vendor ID: 0000 00:27:17.742 Serial Number: .................... 00:27:17.743 Model Number: ........................................ 
00:27:17.743 Firmware Version: 25.01 00:27:17.743 Recommended Arb Burst: 0 00:27:17.743 IEEE OUI Identifier: 00 00 00 00:27:17.743 Multi-path I/O 00:27:17.743 May have multiple subsystem ports: No 00:27:17.743 May have multiple controllers: No 00:27:17.743 Associated with SR-IOV VF: No 00:27:17.743 Max Data Transfer Size: 131072 00:27:17.743 Max Number of Namespaces: 0 00:27:17.743 Max Number of I/O Queues: 1024 00:27:17.743 NVMe Specification Version (VS): 1.3 00:27:17.743 NVMe Specification Version (Identify): 1.3 00:27:17.743 Maximum Queue Entries: 128 00:27:17.743 Contiguous Queues Required: Yes 00:27:17.743 Arbitration Mechanisms Supported 00:27:17.743 Weighted Round Robin: Not Supported 00:27:17.743 Vendor Specific: Not Supported 00:27:17.743 Reset Timeout: 15000 ms 00:27:17.743 Doorbell Stride: 4 bytes 00:27:17.743 NVM Subsystem Reset: Not Supported 00:27:17.743 Command Sets Supported 00:27:17.743 NVM Command Set: Supported 00:27:17.743 Boot Partition: Not Supported 00:27:17.743 Memory Page Size Minimum: 4096 bytes 00:27:17.743 Memory Page Size Maximum: 4096 bytes 00:27:17.743 Persistent Memory Region: Not Supported 00:27:17.743 Optional Asynchronous Events Supported 00:27:17.743 Namespace Attribute Notices: Not Supported 00:27:17.743 Firmware Activation Notices: Not Supported 00:27:17.743 ANA Change Notices: Not Supported 00:27:17.743 PLE Aggregate Log Change Notices: Not Supported 00:27:17.743 LBA Status Info Alert Notices: Not Supported 00:27:17.743 EGE Aggregate Log Change Notices: Not Supported 00:27:17.743 Normal NVM Subsystem Shutdown event: Not Supported 00:27:17.743 Zone Descriptor Change Notices: Not Supported 00:27:17.743 Discovery Log Change Notices: Supported 00:27:17.743 Controller Attributes 00:27:17.743 128-bit Host Identifier: Not Supported 00:27:17.743 Non-Operational Permissive Mode: Not Supported 00:27:17.743 NVM Sets: Not Supported 00:27:17.743 Read Recovery Levels: Not Supported 00:27:17.743 Endurance Groups: Not Supported 00:27:17.743 
Predictable Latency Mode: Not Supported 00:27:17.743 Traffic Based Keep ALive: Not Supported 00:27:17.743 Namespace Granularity: Not Supported 00:27:17.743 SQ Associations: Not Supported 00:27:17.743 UUID List: Not Supported 00:27:17.743 Multi-Domain Subsystem: Not Supported 00:27:17.743 Fixed Capacity Management: Not Supported 00:27:17.743 Variable Capacity Management: Not Supported 00:27:17.743 Delete Endurance Group: Not Supported 00:27:17.743 Delete NVM Set: Not Supported 00:27:17.743 Extended LBA Formats Supported: Not Supported 00:27:17.743 Flexible Data Placement Supported: Not Supported 00:27:17.743 00:27:17.743 Controller Memory Buffer Support 00:27:17.743 ================================ 00:27:17.743 Supported: No 00:27:17.743 00:27:17.743 Persistent Memory Region Support 00:27:17.743 ================================ 00:27:17.743 Supported: No 00:27:17.743 00:27:17.743 Admin Command Set Attributes 00:27:17.743 ============================ 00:27:17.743 Security Send/Receive: Not Supported 00:27:17.743 Format NVM: Not Supported 00:27:17.743 Firmware Activate/Download: Not Supported 00:27:17.743 Namespace Management: Not Supported 00:27:17.743 Device Self-Test: Not Supported 00:27:17.743 Directives: Not Supported 00:27:17.743 NVMe-MI: Not Supported 00:27:17.743 Virtualization Management: Not Supported 00:27:17.743 Doorbell Buffer Config: Not Supported 00:27:17.743 Get LBA Status Capability: Not Supported 00:27:17.743 Command & Feature Lockdown Capability: Not Supported 00:27:17.743 Abort Command Limit: 1 00:27:17.743 Async Event Request Limit: 4 00:27:17.743 Number of Firmware Slots: N/A 00:27:17.743 Firmware Slot 1 Read-Only: N/A 00:27:17.743 Firmware Activation Without Reset: N/A 00:27:17.743 Multiple Update Detection Support: N/A 00:27:17.743 Firmware Update Granularity: No Information Provided 00:27:17.743 Per-Namespace SMART Log: No 00:27:17.743 Asymmetric Namespace Access Log Page: Not Supported 00:27:17.743 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:27:17.743 Command Effects Log Page: Not Supported 00:27:17.743 Get Log Page Extended Data: Supported 00:27:17.743 Telemetry Log Pages: Not Supported 00:27:17.743 Persistent Event Log Pages: Not Supported 00:27:17.743 Supported Log Pages Log Page: May Support 00:27:17.743 Commands Supported & Effects Log Page: Not Supported 00:27:17.743 Feature Identifiers & Effects Log Page:May Support 00:27:17.743 NVMe-MI Commands & Effects Log Page: May Support 00:27:17.743 Data Area 4 for Telemetry Log: Not Supported 00:27:17.743 Error Log Page Entries Supported: 128 00:27:17.743 Keep Alive: Not Supported 00:27:17.743 00:27:17.743 NVM Command Set Attributes 00:27:17.743 ========================== 00:27:17.743 Submission Queue Entry Size 00:27:17.743 Max: 1 00:27:17.743 Min: 1 00:27:17.743 Completion Queue Entry Size 00:27:17.743 Max: 1 00:27:17.743 Min: 1 00:27:17.743 Number of Namespaces: 0 00:27:17.743 Compare Command: Not Supported 00:27:17.743 Write Uncorrectable Command: Not Supported 00:27:17.743 Dataset Management Command: Not Supported 00:27:17.743 Write Zeroes Command: Not Supported 00:27:17.743 Set Features Save Field: Not Supported 00:27:17.743 Reservations: Not Supported 00:27:17.743 Timestamp: Not Supported 00:27:17.743 Copy: Not Supported 00:27:17.743 Volatile Write Cache: Not Present 00:27:17.743 Atomic Write Unit (Normal): 1 00:27:17.743 Atomic Write Unit (PFail): 1 00:27:17.743 Atomic Compare & Write Unit: 1 00:27:17.743 Fused Compare & Write: Supported 00:27:17.743 Scatter-Gather List 00:27:17.743 SGL Command Set: Supported 00:27:17.743 SGL Keyed: Supported 00:27:17.743 SGL Bit Bucket Descriptor: Not Supported 00:27:17.743 SGL Metadata Pointer: Not Supported 00:27:17.743 Oversized SGL: Not Supported 00:27:17.743 SGL Metadata Address: Not Supported 00:27:17.743 SGL Offset: Supported 00:27:17.743 Transport SGL Data Block: Not Supported 00:27:17.743 Replay Protected Memory Block: Not Supported 00:27:17.743 00:27:17.743 
Firmware Slot Information 00:27:17.743 ========================= 00:27:17.743 Active slot: 0 00:27:17.743 00:27:17.743 00:27:17.743 Error Log 00:27:17.743 ========= 00:27:17.743 00:27:17.743 Active Namespaces 00:27:17.743 ================= 00:27:17.743 Discovery Log Page 00:27:17.743 ================== 00:27:17.743 Generation Counter: 2 00:27:17.743 Number of Records: 2 00:27:17.743 Record Format: 0 00:27:17.743 00:27:17.743 Discovery Log Entry 0 00:27:17.743 ---------------------- 00:27:17.743 Transport Type: 3 (TCP) 00:27:17.743 Address Family: 1 (IPv4) 00:27:17.744 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:17.744 Entry Flags: 00:27:17.744 Duplicate Returned Information: 1 00:27:17.744 Explicit Persistent Connection Support for Discovery: 1 00:27:17.744 Transport Requirements: 00:27:17.744 Secure Channel: Not Required 00:27:17.744 Port ID: 0 (0x0000) 00:27:17.744 Controller ID: 65535 (0xffff) 00:27:17.744 Admin Max SQ Size: 128 00:27:17.744 Transport Service Identifier: 4420 00:27:17.744 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:17.744 Transport Address: 10.0.0.2 00:27:17.744 Discovery Log Entry 1 00:27:17.744 ---------------------- 00:27:17.744 Transport Type: 3 (TCP) 00:27:17.744 Address Family: 1 (IPv4) 00:27:17.744 Subsystem Type: 2 (NVM Subsystem) 00:27:17.744 Entry Flags: 00:27:17.744 Duplicate Returned Information: 0 00:27:17.744 Explicit Persistent Connection Support for Discovery: 0 00:27:17.744 Transport Requirements: 00:27:17.744 Secure Channel: Not Required 00:27:17.744 Port ID: 0 (0x0000) 00:27:17.744 Controller ID: 65535 (0xffff) 00:27:17.744 Admin Max SQ Size: 128 00:27:17.744 Transport Service Identifier: 4420 00:27:17.744 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:17.744 Transport Address: 10.0.0.2 [2024-11-29 13:11:20.240444] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:17.744 [2024-11-29 
13:11:20.240456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473100) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.744 [2024-11-29 13:11:20.240469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473280) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.744 [2024-11-29 13:11:20.240479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473400) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.744 [2024-11-29 13:11:20.240489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.744 [2024-11-29 13:11:20.240503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.240519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.240534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.240628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 
13:11:20.240635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.744 [2024-11-29 13:11:20.240639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.240664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.240677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.240906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 13:11:20.240912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.744 [2024-11-29 13:11:20.240915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.240928] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:17.744 [2024-11-29 13:11:20.240935] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:17.744 [2024-11-29 13:11:20.240945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.240949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 
[2024-11-29 13:11:20.240952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.240959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.240970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.241146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 13:11:20.241153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.744 [2024-11-29 13:11:20.241156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.241178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.241192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.241203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.241377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 13:11:20.241383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.744 [2024-11-29 13:11:20.241387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on 
tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.241401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.241415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.241425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.241607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 13:11:20.241614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.744 [2024-11-29 13:11:20.241617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.744 [2024-11-29 13:11:20.241631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.744 [2024-11-29 13:11:20.241639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.744 [2024-11-29 13:11:20.241645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.744 [2024-11-29 13:11:20.241655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.744 [2024-11-29 13:11:20.241842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.744 [2024-11-29 13:11:20.241848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:27:17.744 [2024-11-29 13:11:20.241854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.241858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.745 [2024-11-29 13:11:20.241869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.241873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.241877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.745 [2024-11-29 13:11:20.241883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.745 [2024-11-29 13:11:20.241894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1473580, cid 3, qid 0 00:27:17.745 [2024-11-29 13:11:20.242074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.242080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.242084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.242088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.745 [2024-11-29 13:11:20.242097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.242101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.242105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1411690) 00:27:17.745 [2024-11-29 13:11:20.242112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.745 [2024-11-29 13:11:20.242122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1473580, cid 3, qid 0 00:27:17.745 [2024-11-29 13:11:20.246170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.246180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.246183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.246187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1473580) on tqpair=0x1411690 00:27:17.745 [2024-11-29 13:11:20.246195] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:27:17.745 00:27:17.745 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:17.745 [2024-11-29 13:11:20.293003] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:27:17.745 [2024-11-29 13:11:20.293050] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020139 ] 00:27:17.745 [2024-11-29 13:11:20.349715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:17.745 [2024-11-29 13:11:20.349780] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:17.745 [2024-11-29 13:11:20.349785] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:17.745 [2024-11-29 13:11:20.349806] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:17.745 [2024-11-29 13:11:20.349817] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:17.745 [2024-11-29 13:11:20.350581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:17.745 [2024-11-29 13:11:20.350634] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcc2690 0 00:27:17.745 [2024-11-29 13:11:20.361172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:17.745 [2024-11-29 13:11:20.361187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:17.745 [2024-11-29 13:11:20.361192] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:17.745 [2024-11-29 13:11:20.361196] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:17.745 [2024-11-29 13:11:20.361234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.361240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.361244] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.745 [2024-11-29 13:11:20.361258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:17.745 [2024-11-29 13:11:20.361282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.745 [2024-11-29 13:11:20.368172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.368184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.368188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.745 [2024-11-29 13:11:20.368203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:17.745 [2024-11-29 13:11:20.368210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:17.745 [2024-11-29 13:11:20.368216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:17.745 [2024-11-29 13:11:20.368232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.745 [2024-11-29 13:11:20.368249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.745 [2024-11-29 13:11:20.368266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.745 [2024-11-29 13:11:20.368444] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.368451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.368454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.745 [2024-11-29 13:11:20.368466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:17.745 [2024-11-29 13:11:20.368474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:17.745 [2024-11-29 13:11:20.368482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.745 [2024-11-29 13:11:20.368497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.745 [2024-11-29 13:11:20.368507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.745 [2024-11-29 13:11:20.368683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.368690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.368693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.745 [2024-11-29 13:11:20.368707] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:27:17.745 [2024-11-29 13:11:20.368716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:17.745 [2024-11-29 13:11:20.368723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.368730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.745 [2024-11-29 13:11:20.368737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.745 [2024-11-29 13:11:20.368748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.745 [2024-11-29 13:11:20.369004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.745 [2024-11-29 13:11:20.369011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.745 [2024-11-29 13:11:20.369014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.369018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.745 [2024-11-29 13:11:20.369023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:17.745 [2024-11-29 13:11:20.369033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.745 [2024-11-29 13:11:20.369037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.369048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.746 [2024-11-29 13:11:20.369058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.746 [2024-11-29 13:11:20.369228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.746 [2024-11-29 13:11:20.369235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.746 [2024-11-29 13:11:20.369238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.746 [2024-11-29 13:11:20.369247] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:17.746 [2024-11-29 13:11:20.369252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:17.746 [2024-11-29 13:11:20.369261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:17.746 [2024-11-29 13:11:20.369370] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:17.746 [2024-11-29 13:11:20.369374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:17.746 [2024-11-29 13:11:20.369383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.369397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.746 [2024-11-29 13:11:20.369409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.746 [2024-11-29 13:11:20.369606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.746 [2024-11-29 13:11:20.369615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.746 [2024-11-29 13:11:20.369619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.746 [2024-11-29 13:11:20.369627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:17.746 [2024-11-29 13:11:20.369638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.369653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.746 [2024-11-29 13:11:20.369663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.746 [2024-11-29 13:11:20.369885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.746 [2024-11-29 13:11:20.369891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.746 [2024-11-29 13:11:20.369895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.746 [2024-11-29 13:11:20.369903] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:17.746 [2024-11-29 13:11:20.369908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:17.746 [2024-11-29 13:11:20.369916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:17.746 [2024-11-29 13:11:20.369933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:17.746 [2024-11-29 13:11:20.369943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.369946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.369954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.746 [2024-11-29 13:11:20.369964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.746 [2024-11-29 13:11:20.370213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.746 [2024-11-29 13:11:20.370220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.746 [2024-11-29 13:11:20.370224] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.370228] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=4096, cccid=0 00:27:17.746 [2024-11-29 13:11:20.370233] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24100) on tqpair(0xcc2690): expected_datao=0, payload_size=4096 00:27:17.746 [2024-11-29 13:11:20.370237] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.370253] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.370258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.746 [2024-11-29 13:11:20.415185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.746 [2024-11-29 13:11:20.415188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.746 [2024-11-29 13:11:20.415202] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:17.746 [2024-11-29 13:11:20.415211] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:17.746 [2024-11-29 13:11:20.415216] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:17.746 [2024-11-29 13:11:20.415220] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:17.746 [2024-11-29 13:11:20.415225] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:17.746 [2024-11-29 13:11:20.415230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:17.746 [2024-11-29 13:11:20.415239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:17.746 [2024-11-29 13:11:20.415246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415250] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.415262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:17.746 [2024-11-29 13:11:20.415276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24100, cid 0, qid 0 00:27:17.746 [2024-11-29 13:11:20.415490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.746 [2024-11-29 13:11:20.415497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.746 [2024-11-29 13:11:20.415500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:17.746 [2024-11-29 13:11:20.415512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc2690) 00:27:17.746 [2024-11-29 13:11:20.415526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.746 [2024-11-29 13:11:20.415532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.746 [2024-11-29 13:11:20.415539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.415545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:17.747 [2024-11-29 13:11:20.415552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.415565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.747 [2024-11-29 13:11:20.415571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.415584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.747 [2024-11-29 13:11:20.415589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.415601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.415610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.415621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.747 [2024-11-29 13:11:20.415634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd24100, cid 0, qid 0 00:27:17.747 [2024-11-29 13:11:20.415639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24280, cid 1, qid 0 00:27:17.747 [2024-11-29 13:11:20.415644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24400, cid 2, qid 0 00:27:17.747 [2024-11-29 13:11:20.415649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:17.747 [2024-11-29 13:11:20.415654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:17.747 [2024-11-29 13:11:20.415897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.747 [2024-11-29 13:11:20.415903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.747 [2024-11-29 13:11:20.415907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:17.747 [2024-11-29 13:11:20.415916] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:17.747 [2024-11-29 13:11:20.415921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.415932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.415939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.415945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.415949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.747 [2024-11-29 
13:11:20.415953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.415960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:17.747 [2024-11-29 13:11:20.415972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:17.747 [2024-11-29 13:11:20.416155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:17.747 [2024-11-29 13:11:20.416168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:17.747 [2024-11-29 13:11:20.416171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.416175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:17.747 [2024-11-29 13:11:20.416244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.416254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:17.747 [2024-11-29 13:11:20.416262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.416266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:17.747 [2024-11-29 13:11:20.416273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.747 [2024-11-29 13:11:20.416284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:17.747 [2024-11-29 13:11:20.416522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:17.747 [2024-11-29 13:11:20.416531] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:17.747 [2024-11-29 13:11:20.416535] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.416539] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=4096, cccid=4 00:27:17.747 [2024-11-29 13:11:20.416544] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24700) on tqpair(0xcc2690): expected_datao=0, payload_size=4096 00:27:17.747 [2024-11-29 13:11:20.416548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.416563] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:17.747 [2024-11-29 13:11:20.416567] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.009 [2024-11-29 13:11:20.457313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.009 [2024-11-29 13:11:20.457325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.009 [2024-11-29 13:11:20.457328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.009 [2024-11-29 13:11:20.457332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:18.009 [2024-11-29 13:11:20.457348] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:18.009 [2024-11-29 13:11:20.457358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:18.009 [2024-11-29 13:11:20.457368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:18.009 [2024-11-29 13:11:20.457375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.009 [2024-11-29 13:11:20.457379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xcc2690) 00:27:18.009 [2024-11-29 13:11:20.457387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.009 [2024-11-29 13:11:20.457399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:18.010 [2024-11-29 13:11:20.457611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.457618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.457622] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=4096, cccid=4 00:27:18.010 [2024-11-29 13:11:20.457630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24700) on tqpair(0xcc2690): expected_datao=0, payload_size=4096 00:27:18.010 [2024-11-29 13:11:20.457634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457641] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457645] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.457788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.457792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.457806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:18.010 [2024-11-29 
13:11:20.457816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.457823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.457826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.457833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.457847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:18.010 [2024-11-29 13:11:20.458061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.458067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.458071] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.458075] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=4096, cccid=4 00:27:18.010 [2024-11-29 13:11:20.458079] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24700) on tqpair(0xcc2690): expected_datao=0, payload_size=4096 00:27:18.010 [2024-11-29 13:11:20.458084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.458097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.458101] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.503187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.503191] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.503213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503254] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:18.010 [2024-11-29 13:11:20.503259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:18.010 [2024-11-29 13:11:20.503264] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:18.010 [2024-11-29 13:11:20.503283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503287] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.503295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.503303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.503316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.010 [2024-11-29 13:11:20.503333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:18.010 [2024-11-29 13:11:20.503339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24880, cid 5, qid 0 00:27:18.010 [2024-11-29 13:11:20.503598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.503605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.503608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.503620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.503626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.503629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24880) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 
13:11:20.503643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.503653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.503664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24880, cid 5, qid 0 00:27:18.010 [2024-11-29 13:11:20.503849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.503856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.503859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24880) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.503872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.503876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.503883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.503893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24880, cid 5, qid 0 00:27:18.010 [2024-11-29 13:11:20.504096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.504102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.504105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xd24880) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.504119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.504129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.504139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24880, cid 5, qid 0 00:27:18.010 [2024-11-29 13:11:20.504343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.504350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.504353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24880) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.504374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.504386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.504393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.504409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 
[2024-11-29 13:11:20.504417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.504427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.504436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc2690) 00:27:18.010 [2024-11-29 13:11:20.504446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.010 [2024-11-29 13:11:20.504458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24880, cid 5, qid 0 00:27:18.010 [2024-11-29 13:11:20.504463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24700, cid 4, qid 0 00:27:18.010 [2024-11-29 13:11:20.504468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24a00, cid 6, qid 0 00:27:18.010 [2024-11-29 13:11:20.504473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24b80, cid 7, qid 0 00:27:18.010 [2024-11-29 13:11:20.504787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.504795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.504798] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504802] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=8192, cccid=5 00:27:18.010 [2024-11-29 13:11:20.504807] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xd24880) on tqpair(0xcc2690): expected_datao=0, payload_size=8192 00:27:18.010 [2024-11-29 13:11:20.504811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504902] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.504914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.504917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=512, cccid=4 00:27:18.010 [2024-11-29 13:11:20.504926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24700) on tqpair(0xcc2690): expected_datao=0, payload_size=512 00:27:18.010 [2024-11-29 13:11:20.504930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504950] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504954] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.504965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.504969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=512, cccid=6 00:27:18.010 [2024-11-29 13:11:20.504977] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24a00) on tqpair(0xcc2690): expected_datao=0, 
payload_size=512 00:27:18.010 [2024-11-29 13:11:20.504981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504994] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.504999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:18.010 [2024-11-29 13:11:20.505005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:18.010 [2024-11-29 13:11:20.505009] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.505013] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc2690): datao=0, datal=4096, cccid=7 00:27:18.010 [2024-11-29 13:11:20.505017] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd24b80) on tqpair(0xcc2690): expected_datao=0, payload_size=4096 00:27:18.010 [2024-11-29 13:11:20.505022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.505029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.505032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.505231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.505238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.010 [2024-11-29 13:11:20.505241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.010 [2024-11-29 13:11:20.505245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24880) on tqpair=0xcc2690 00:27:18.010 [2024-11-29 13:11:20.505259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.010 [2024-11-29 13:11:20.505265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.011 [2024-11-29 
13:11:20.505268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.011 [2024-11-29 13:11:20.505272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24700) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.011 [2024-11-29 13:11:20.505289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.011 [2024-11-29 13:11:20.505292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.011 [2024-11-29 13:11:20.505296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24a00) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.011 [2024-11-29 13:11:20.505309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.011 [2024-11-29 13:11:20.505312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.011 [2024-11-29 13:11:20.505316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24b80) on tqpair=0xcc2690 00:27:18.011 ===================================================== 00:27:18.011 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.011 ===================================================== 00:27:18.011 Controller Capabilities/Features 00:27:18.011 ================================ 00:27:18.011 Vendor ID: 8086 00:27:18.011 Subsystem Vendor ID: 8086 00:27:18.011 Serial Number: SPDK00000000000001 00:27:18.011 Model Number: SPDK bdev Controller 00:27:18.011 Firmware Version: 25.01 00:27:18.011 Recommended Arb Burst: 6 00:27:18.011 IEEE OUI Identifier: e4 d2 5c 00:27:18.011 Multi-path I/O 00:27:18.011 May have multiple subsystem ports: Yes 00:27:18.011 May have multiple controllers: Yes 00:27:18.011 Associated with SR-IOV VF: No 00:27:18.011 Max Data Transfer Size: 131072 00:27:18.011 Max Number of Namespaces: 32 00:27:18.011 
Max Number of I/O Queues: 127 00:27:18.011 NVMe Specification Version (VS): 1.3 00:27:18.011 NVMe Specification Version (Identify): 1.3 00:27:18.011 Maximum Queue Entries: 128 00:27:18.011 Contiguous Queues Required: Yes 00:27:18.011 Arbitration Mechanisms Supported 00:27:18.011 Weighted Round Robin: Not Supported 00:27:18.011 Vendor Specific: Not Supported 00:27:18.011 Reset Timeout: 15000 ms 00:27:18.011 Doorbell Stride: 4 bytes 00:27:18.011 NVM Subsystem Reset: Not Supported 00:27:18.011 Command Sets Supported 00:27:18.011 NVM Command Set: Supported 00:27:18.011 Boot Partition: Not Supported 00:27:18.011 Memory Page Size Minimum: 4096 bytes 00:27:18.011 Memory Page Size Maximum: 4096 bytes 00:27:18.011 Persistent Memory Region: Not Supported 00:27:18.011 Optional Asynchronous Events Supported 00:27:18.011 Namespace Attribute Notices: Supported 00:27:18.011 Firmware Activation Notices: Not Supported 00:27:18.011 ANA Change Notices: Not Supported 00:27:18.011 PLE Aggregate Log Change Notices: Not Supported 00:27:18.011 LBA Status Info Alert Notices: Not Supported 00:27:18.011 EGE Aggregate Log Change Notices: Not Supported 00:27:18.011 Normal NVM Subsystem Shutdown event: Not Supported 00:27:18.011 Zone Descriptor Change Notices: Not Supported 00:27:18.011 Discovery Log Change Notices: Not Supported 00:27:18.011 Controller Attributes 00:27:18.011 128-bit Host Identifier: Supported 00:27:18.011 Non-Operational Permissive Mode: Not Supported 00:27:18.011 NVM Sets: Not Supported 00:27:18.011 Read Recovery Levels: Not Supported 00:27:18.011 Endurance Groups: Not Supported 00:27:18.011 Predictable Latency Mode: Not Supported 00:27:18.011 Traffic Based Keep Alive: Not Supported 00:27:18.011 Namespace Granularity: Not Supported 00:27:18.011 SQ Associations: Not Supported 00:27:18.011 UUID List: Not Supported 00:27:18.011 Multi-Domain Subsystem: Not Supported 00:27:18.011 Fixed Capacity Management: Not Supported 00:27:18.011 Variable Capacity Management: Not Supported 
00:27:18.011 Delete Endurance Group: Not Supported 00:27:18.011 Delete NVM Set: Not Supported 00:27:18.011 Extended LBA Formats Supported: Not Supported 00:27:18.011 Flexible Data Placement Supported: Not Supported 00:27:18.011 00:27:18.011 Controller Memory Buffer Support 00:27:18.011 ================================ 00:27:18.011 Supported: No 00:27:18.011 00:27:18.011 Persistent Memory Region Support 00:27:18.011 ================================ 00:27:18.011 Supported: No 00:27:18.011 00:27:18.011 Admin Command Set Attributes 00:27:18.011 ============================ 00:27:18.011 Security Send/Receive: Not Supported 00:27:18.011 Format NVM: Not Supported 00:27:18.011 Firmware Activate/Download: Not Supported 00:27:18.011 Namespace Management: Not Supported 00:27:18.011 Device Self-Test: Not Supported 00:27:18.011 Directives: Not Supported 00:27:18.011 NVMe-MI: Not Supported 00:27:18.011 Virtualization Management: Not Supported 00:27:18.011 Doorbell Buffer Config: Not Supported 00:27:18.011 Get LBA Status Capability: Not Supported 00:27:18.011 Command & Feature Lockdown Capability: Not Supported 00:27:18.011 Abort Command Limit: 4 00:27:18.011 Async Event Request Limit: 4 00:27:18.011 Number of Firmware Slots: N/A 00:27:18.011 Firmware Slot 1 Read-Only: N/A 00:27:18.011 Firmware Activation Without Reset: N/A 00:27:18.011 Multiple Update Detection Support: N/A 00:27:18.011 Firmware Update Granularity: No Information Provided 00:27:18.011 Per-Namespace SMART Log: No 00:27:18.011 Asymmetric Namespace Access Log Page: Not Supported 00:27:18.011 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:18.011 Command Effects Log Page: Supported 00:27:18.011 Get Log Page Extended Data: Supported 00:27:18.011 Telemetry Log Pages: Not Supported 00:27:18.011 Persistent Event Log Pages: Not Supported 00:27:18.011 Supported Log Pages Log Page: May Support 00:27:18.011 Commands Supported & Effects Log Page: Not Supported 00:27:18.011 Feature Identifiers & Effects Log Page: May Support 
00:27:18.011 NVMe-MI Commands & Effects Log Page: May Support 00:27:18.011 Data Area 4 for Telemetry Log: Not Supported 00:27:18.011 Error Log Page Entries Supported: 128 00:27:18.011 Keep Alive: Supported 00:27:18.011 Keep Alive Granularity: 10000 ms 00:27:18.011 00:27:18.011 NVM Command Set Attributes 00:27:18.011 ========================== 00:27:18.011 Submission Queue Entry Size 00:27:18.011 Max: 64 00:27:18.011 Min: 64 00:27:18.011 Completion Queue Entry Size 00:27:18.011 Max: 16 00:27:18.011 Min: 16 00:27:18.011 Number of Namespaces: 32 00:27:18.011 Compare Command: Supported 00:27:18.011 Write Uncorrectable Command: Not Supported 00:27:18.011 Dataset Management Command: Supported 00:27:18.011 Write Zeroes Command: Supported 00:27:18.011 Set Features Save Field: Not Supported 00:27:18.011 Reservations: Supported 00:27:18.011 Timestamp: Not Supported 00:27:18.011 Copy: Supported 00:27:18.011 Volatile Write Cache: Present 00:27:18.011 Atomic Write Unit (Normal): 1 00:27:18.011 Atomic Write Unit (PFail): 1 00:27:18.011 Atomic Compare & Write Unit: 1 00:27:18.011 Fused Compare & Write: Supported 00:27:18.011 Scatter-Gather List 00:27:18.011 SGL Command Set: Supported 00:27:18.011 SGL Keyed: Supported 00:27:18.011 SGL Bit Bucket Descriptor: Not Supported 00:27:18.011 SGL Metadata Pointer: Not Supported 00:27:18.011 Oversized SGL: Not Supported 00:27:18.011 SGL Metadata Address: Not Supported 00:27:18.011 SGL Offset: Supported 00:27:18.011 Transport SGL Data Block: Not Supported 00:27:18.011 Replay Protected Memory Block: Not Supported 00:27:18.011 00:27:18.011 Firmware Slot Information 00:27:18.011 ========================= 00:27:18.011 Active slot: 1 00:27:18.011 Slot 1 Firmware Revision: 25.01 00:27:18.011 00:27:18.011 00:27:18.011 Commands Supported and Effects 00:27:18.011 ============================== 00:27:18.011 Admin Commands 00:27:18.011 -------------- 00:27:18.011 Get Log Page (02h): Supported 00:27:18.011 Identify (06h): Supported 00:27:18.011 Abort 
(08h): Supported 00:27:18.011 Set Features (09h): Supported 00:27:18.011 Get Features (0Ah): Supported 00:27:18.011 Asynchronous Event Request (0Ch): Supported 00:27:18.011 Keep Alive (18h): Supported 00:27:18.011 I/O Commands 00:27:18.011 ------------ 00:27:18.011 Flush (00h): Supported LBA-Change 00:27:18.011 Write (01h): Supported LBA-Change 00:27:18.011 Read (02h): Supported 00:27:18.011 Compare (05h): Supported 00:27:18.011 Write Zeroes (08h): Supported LBA-Change 00:27:18.011 Dataset Management (09h): Supported LBA-Change 00:27:18.011 Copy (19h): Supported LBA-Change 00:27:18.011 00:27:18.011 Error Log 00:27:18.011 ========= 00:27:18.011 00:27:18.011 Arbitration 00:27:18.011 =========== 00:27:18.011 Arbitration Burst: 1 00:27:18.011 00:27:18.011 Power Management 00:27:18.011 ================ 00:27:18.011 Number of Power States: 1 00:27:18.011 Current Power State: Power State #0 00:27:18.011 Power State #0: 00:27:18.011 Max Power: 0.00 W 00:27:18.011 Non-Operational State: Operational 00:27:18.011 Entry Latency: Not Reported 00:27:18.011 Exit Latency: Not Reported 00:27:18.011 Relative Read Throughput: 0 00:27:18.011 Relative Read Latency: 0 00:27:18.011 Relative Write Throughput: 0 00:27:18.011 Relative Write Latency: 0 00:27:18.011 Idle Power: Not Reported 00:27:18.011 Active Power: Not Reported 00:27:18.011 Non-Operational Permissive Mode: Not Supported 00:27:18.011 00:27:18.011 Health Information 00:27:18.011 ================== 00:27:18.011 Critical Warnings: 00:27:18.011 Available Spare Space: OK 00:27:18.011 Temperature: OK 00:27:18.011 Device Reliability: OK 00:27:18.011 Read Only: No 00:27:18.011 Volatile Memory Backup: OK 00:27:18.011 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:18.011 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:18.011 Available Spare: 0% 00:27:18.011 Available Spare Threshold: 0% 00:27:18.011 Life Percentage Used:[2024-11-29 13:11:20.505420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.011 
[2024-11-29 13:11:20.505425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc2690) 00:27:18.011 [2024-11-29 13:11:20.505433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.011 [2024-11-29 13:11:20.505445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24b80, cid 7, qid 0 00:27:18.011 [2024-11-29 13:11:20.505652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.011 [2024-11-29 13:11:20.505659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.011 [2024-11-29 13:11:20.505662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.011 [2024-11-29 13:11:20.505666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24b80) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505703] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:18.011 [2024-11-29 13:11:20.505713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24100) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.011 [2024-11-29 13:11:20.505725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24280) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.011 [2024-11-29 13:11:20.505740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24400) on tqpair=0xcc2690 00:27:18.011 [2024-11-29 13:11:20.505744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.012 
[2024-11-29 13:11:20.505749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.505754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.012 [2024-11-29 13:11:20.505762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.505766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.505770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.505777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.505790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.505965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.505971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.505975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.505979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.505986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.505990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.505994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.506001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.506015] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.506239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.506246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.506250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.506259] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:18.012 [2024-11-29 13:11:20.506263] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:18.012 [2024-11-29 13:11:20.506273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.506287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.506298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.506507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.506516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.506519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.506534] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.506558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.506569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.506789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.506798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.506801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.506816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.506824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.506830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.506841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.507058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.507066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.507069] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.507076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.507086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.507090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.507094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc2690) 00:27:18.012 [2024-11-29 13:11:20.507102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.012 [2024-11-29 13:11:20.507112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd24580, cid 3, qid 0 00:27:18.012 [2024-11-29 13:11:20.511172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:18.012 [2024-11-29 13:11:20.511181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:18.012 [2024-11-29 13:11:20.511185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:18.012 [2024-11-29 13:11:20.511189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd24580) on tqpair=0xcc2690 00:27:18.012 [2024-11-29 13:11:20.511197] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:27:18.012 0% 00:27:18.012 Data Units Read: 0 00:27:18.012 Data Units Written: 0 00:27:18.012 Host Read Commands: 0 00:27:18.012 Host Write Commands: 0 00:27:18.012 Controller Busy Time: 0 minutes 00:27:18.012 Power Cycles: 0 00:27:18.012 Power On Hours: 0 hours 00:27:18.012 Unsafe Shutdowns: 0 00:27:18.012 Unrecoverable Media Errors: 0 00:27:18.012 Lifetime Error Log Entries: 0 00:27:18.012 Warning Temperature Time: 0 minutes 00:27:18.012 Critical Temperature Time: 0 minutes 00:27:18.012 00:27:18.012 Number of 
Queues 00:27:18.012 ================ 00:27:18.012 Number of I/O Submission Queues: 127 00:27:18.012 Number of I/O Completion Queues: 127 00:27:18.012 00:27:18.012 Active Namespaces 00:27:18.012 ================= 00:27:18.012 Namespace ID:1 00:27:18.012 Error Recovery Timeout: Unlimited 00:27:18.012 Command Set Identifier: NVM (00h) 00:27:18.012 Deallocate: Supported 00:27:18.012 Deallocated/Unwritten Error: Not Supported 00:27:18.012 Deallocated Read Value: Unknown 00:27:18.012 Deallocate in Write Zeroes: Not Supported 00:27:18.012 Deallocated Guard Field: 0xFFFF 00:27:18.012 Flush: Supported 00:27:18.012 Reservation: Supported 00:27:18.012 Namespace Sharing Capabilities: Multiple Controllers 00:27:18.012 Size (in LBAs): 131072 (0GiB) 00:27:18.012 Capacity (in LBAs): 131072 (0GiB) 00:27:18.012 Utilization (in LBAs): 131072 (0GiB) 00:27:18.012 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:18.012 EUI64: ABCDEF0123456789 00:27:18.012 UUID: 17a02483-d03c-4fee-94e5-8dfe5e750bc6 00:27:18.012 Thin Provisioning: Not Supported 00:27:18.012 Per-NS Atomic Units: Yes 00:27:18.012 Atomic Boundary Size (Normal): 0 00:27:18.012 Atomic Boundary Size (PFail): 0 00:27:18.012 Atomic Boundary Offset: 0 00:27:18.012 Maximum Single Source Range Length: 65535 00:27:18.012 Maximum Copy Length: 65535 00:27:18.012 Maximum Source Range Count: 1 00:27:18.012 NGUID/EUI64 Never Reused: No 00:27:18.012 Namespace Write Protected: No 00:27:18.012 Number of LBA Formats: 1 00:27:18.012 Current LBA Format: LBA Format #00 00:27:18.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:18.012 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.012 rmmod nvme_tcp 00:27:18.012 rmmod nvme_fabrics 00:27:18.012 rmmod nvme_keyring 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1019997 ']' 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1019997 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1019997 ']' 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1019997 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:18.012 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.012 13:11:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019997 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019997' 00:27:18.274 killing process with pid 1019997 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1019997 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1019997 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.274 13:11:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:20.830 13:11:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.830 00:27:20.830 real 0m11.860s 00:27:20.830 user 0m9.309s 00:27:20.830 sys 0m6.145s 00:27:20.830 13:11:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.830 13:11:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:20.830 ************************************ 00:27:20.830 END TEST nvmf_identify 00:27:20.830 ************************************ 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.830 ************************************ 00:27:20.830 START TEST nvmf_perf 00:27:20.830 ************************************ 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:20.830 * Looking for test storage... 
00:27:20.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:20.830 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.831 --rc genhtml_branch_coverage=1 00:27:20.831 --rc genhtml_function_coverage=1 00:27:20.831 --rc genhtml_legend=1 00:27:20.831 --rc geninfo_all_blocks=1 00:27:20.831 --rc geninfo_unexecuted_blocks=1 00:27:20.831 00:27:20.831 ' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:27:20.831 --rc genhtml_branch_coverage=1 00:27:20.831 --rc genhtml_function_coverage=1 00:27:20.831 --rc genhtml_legend=1 00:27:20.831 --rc geninfo_all_blocks=1 00:27:20.831 --rc geninfo_unexecuted_blocks=1 00:27:20.831 00:27:20.831 ' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.831 --rc genhtml_branch_coverage=1 00:27:20.831 --rc genhtml_function_coverage=1 00:27:20.831 --rc genhtml_legend=1 00:27:20.831 --rc geninfo_all_blocks=1 00:27:20.831 --rc geninfo_unexecuted_blocks=1 00:27:20.831 00:27:20.831 ' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.831 --rc genhtml_branch_coverage=1 00:27:20.831 --rc genhtml_function_coverage=1 00:27:20.831 --rc genhtml_legend=1 00:27:20.831 --rc geninfo_all_blocks=1 00:27:20.831 --rc geninfo_unexecuted_blocks=1 00:27:20.831 00:27:20.831 ' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:20.831 13:11:23 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.831 13:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.973 13:11:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.973 
13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.973 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:28.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:28.974 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:28.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.974 13:11:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:28.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:28.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:27:28.974 00:27:28.974 --- 10.0.0.2 ping statistics --- 00:27:28.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.974 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:27:28.974 00:27:28.974 --- 10.0.0.1 ping statistics --- 00:27:28.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.974 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1024426 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1024426 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1024426 ']' 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.974 13:11:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:28.974 [2024-11-29 13:11:30.927597] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:27:28.974 [2024-11-29 13:11:30.927661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.974 [2024-11-29 13:11:31.027828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.974 [2024-11-29 13:11:31.081845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.974 [2024-11-29 13:11:31.081898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.974 [2024-11-29 13:11:31.081906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.974 [2024-11-29 13:11:31.081913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.974 [2024-11-29 13:11:31.081920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.974 [2024-11-29 13:11:31.084265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.974 [2024-11-29 13:11:31.084598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.974 [2024-11-29 13:11:31.084730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.974 [2024-11-29 13:11:31.084734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:29.236 13:11:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:29.808 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:29.808 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:30.069 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:27:30.069 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:30.330 13:11:32 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:30.330 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:27:30.330 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:30.330 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:30.330 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:30.330 [2024-11-29 13:11:32.914200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.330 13:11:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.591 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:30.591 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.852 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:30.852 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:31.114 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.114 [2024-11-29 13:11:33.714042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.114 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:27:31.374 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:27:31.374 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:31.374 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:31.374 13:11:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:32.756 Initializing NVMe Controllers 00:27:32.756 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:27:32.756 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:27:32.756 Initialization complete. Launching workers. 00:27:32.756 ======================================================== 00:27:32.756 Latency(us) 00:27:32.756 Device Information : IOPS MiB/s Average min max 00:27:32.756 PCIE (0000:65:00.0) NSID 1 from core 0: 78844.40 307.99 405.06 13.33 4983.74 00:27:32.756 ======================================================== 00:27:32.756 Total : 78844.40 307.99 405.06 13.33 4983.74 00:27:32.756 00:27:32.756 13:11:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:34.140 Initializing NVMe Controllers 00:27:34.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:34.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:34.140 Initialization complete. Launching workers. 
00:27:34.140 ======================================================== 00:27:34.140 Latency(us) 00:27:34.140 Device Information : IOPS MiB/s Average min max 00:27:34.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 117.00 0.46 8886.96 214.38 45598.45 00:27:34.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17337.81 7956.08 55865.84 00:27:34.140 ======================================================== 00:27:34.140 Total : 175.00 0.68 11687.81 214.38 55865.84 00:27:34.140 00:27:34.140 13:11:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.526 Initializing NVMe Controllers 00:27:35.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:35.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:35.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:35.526 Initialization complete. Launching workers. 
00:27:35.526 ======================================================== 00:27:35.526 Latency(us) 00:27:35.526 Device Information : IOPS MiB/s Average min max 00:27:35.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11565.99 45.18 2769.36 444.56 8334.57 00:27:35.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3805.00 14.86 8456.33 5439.13 16161.84 00:27:35.527 ======================================================== 00:27:35.527 Total : 15370.98 60.04 4177.13 444.56 16161.84 00:27:35.527 00:27:35.527 13:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:35.527 13:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:35.527 13:11:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:38.073 Initializing NVMe Controllers 00:27:38.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.073 Controller IO queue size 128, less than required. 00:27:38.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.074 Controller IO queue size 128, less than required. 00:27:38.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:38.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:38.074 Initialization complete. Launching workers. 
00:27:38.074 ======================================================== 00:27:38.074 Latency(us) 00:27:38.074 Device Information : IOPS MiB/s Average min max 00:27:38.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1914.97 478.74 67337.00 40107.52 118257.24 00:27:38.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.49 150.87 221863.92 55790.86 323158.75 00:27:38.074 ======================================================== 00:27:38.074 Total : 2518.46 629.61 104365.79 40107.52 323158.75 00:27:38.074 00:27:38.335 13:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:38.335 No valid NVMe controllers or AIO or URING devices found 00:27:38.335 Initializing NVMe Controllers 00:27:38.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:38.335 Controller IO queue size 128, less than required. 00:27:38.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.335 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:38.335 Controller IO queue size 128, less than required. 00:27:38.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:38.335 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:38.335 WARNING: Some requested NVMe devices were skipped 00:27:38.335 13:11:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:40.880 Initializing NVMe Controllers 00:27:40.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:40.880 Controller IO queue size 128, less than required. 00:27:40.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:40.880 Controller IO queue size 128, less than required. 00:27:40.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:40.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:40.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:40.880 Initialization complete. Launching workers. 
00:27:40.880 00:27:40.880 ==================== 00:27:40.880 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:40.880 TCP transport: 00:27:40.880 polls: 34571 00:27:40.880 idle_polls: 20248 00:27:40.880 sock_completions: 14323 00:27:40.880 nvme_completions: 7381 00:27:40.880 submitted_requests: 11026 00:27:40.880 queued_requests: 1 00:27:40.880 00:27:40.880 ==================== 00:27:40.880 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:40.880 TCP transport: 00:27:40.880 polls: 39864 00:27:40.880 idle_polls: 27078 00:27:40.880 sock_completions: 12786 00:27:40.880 nvme_completions: 7293 00:27:40.880 submitted_requests: 11058 00:27:40.880 queued_requests: 1 00:27:40.880 ======================================================== 00:27:40.880 Latency(us) 00:27:40.880 Device Information : IOPS MiB/s Average min max 00:27:40.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1842.78 460.69 70482.41 41460.24 126644.82 00:27:40.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1820.80 455.20 70727.06 24895.47 121647.00 00:27:40.880 ======================================================== 00:27:40.880 Total : 3663.58 915.89 70604.00 24895.47 126644.82 00:27:40.880 00:27:40.880 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:40.880 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.141 13:11:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.141 rmmod nvme_tcp 00:27:41.141 rmmod nvme_fabrics 00:27:41.141 rmmod nvme_keyring 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1024426 ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1024426 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1024426 ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1024426 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024426 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024426' 00:27:41.141 killing process with pid 1024426 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 1024426 00:27:41.141 13:11:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1024426 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.246 13:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.162 13:11:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.162 00:27:45.162 real 0m24.779s 00:27:45.162 user 1m0.222s 00:27:45.162 sys 0m8.660s 00:27:45.162 13:11:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.162 13:11:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.162 ************************************ 00:27:45.162 END TEST nvmf_perf 00:27:45.162 ************************************ 00:27:45.423 13:11:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:45.423 13:11:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.423 13:11:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.423 13:11:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.423 ************************************ 00:27:45.423 START TEST nvmf_fio_host 00:27:45.423 ************************************ 00:27:45.423 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:45.423 * Looking for test storage... 00:27:45.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.423 13:11:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.423 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.424 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.686 13:11:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.686 --rc genhtml_branch_coverage=1 00:27:45.686 --rc genhtml_function_coverage=1 00:27:45.686 --rc genhtml_legend=1 00:27:45.686 --rc geninfo_all_blocks=1 00:27:45.686 --rc geninfo_unexecuted_blocks=1 00:27:45.686 00:27:45.686 ' 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.686 --rc genhtml_branch_coverage=1 00:27:45.686 --rc genhtml_function_coverage=1 00:27:45.686 --rc genhtml_legend=1 00:27:45.686 --rc geninfo_all_blocks=1 00:27:45.686 --rc geninfo_unexecuted_blocks=1 00:27:45.686 00:27:45.686 ' 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.686 --rc genhtml_branch_coverage=1 00:27:45.686 --rc genhtml_function_coverage=1 00:27:45.686 --rc genhtml_legend=1 00:27:45.686 --rc geninfo_all_blocks=1 00:27:45.686 --rc geninfo_unexecuted_blocks=1 00:27:45.686 00:27:45.686 ' 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.686 --rc genhtml_branch_coverage=1 00:27:45.686 --rc genhtml_function_coverage=1 00:27:45.686 --rc genhtml_legend=1 00:27:45.686 --rc geninfo_all_blocks=1 00:27:45.686 --rc geninfo_unexecuted_blocks=1 00:27:45.686 00:27:45.686 ' 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.686 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.687 13:11:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.687 13:11:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:27:53.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:53.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.831 13:11:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:53.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.831 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:53.832 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.832 13:11:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:27:53.832 00:27:53.832 --- 10.0.0.2 ping statistics --- 00:27:53.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.832 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:27:53.832 00:27:53.832 --- 10.0.0.1 ping statistics --- 00:27:53.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.832 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1031502 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1031502 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1031502 ']' 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.832 13:11:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.832 [2024-11-29 13:11:55.784877] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:27:53.832 [2024-11-29 13:11:55.784944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.832 [2024-11-29 13:11:55.885078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.832 [2024-11-29 13:11:55.938411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.832 [2024-11-29 13:11:55.938462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:53.832 [2024-11-29 13:11:55.938471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.832 [2024-11-29 13:11:55.938480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.832 [2024-11-29 13:11:55.938488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.832 [2024-11-29 13:11:55.940521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.832 [2024-11-29 13:11:55.940683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.832 [2024-11-29 13:11:55.940841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.832 [2024-11-29 13:11:55.940841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.094 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.094 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:54.094 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:54.356 [2024-11-29 13:11:56.778861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.356 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:54.356 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.356 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.356 13:11:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:54.617 Malloc1 00:27:54.617 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.617 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:54.877 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.138 [2024-11-29 13:11:57.627755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.138 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:55.400 13:11:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:55.400 13:11:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:55.661 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:55.661 fio-3.35 00:27:55.661 Starting 1 thread 00:27:58.235 00:27:58.235 test: (groupid=0, jobs=1): err= 0: pid=1032347: Fri Nov 29 13:12:00 2024 00:27:58.235 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:27:58.235 slat (usec): min=2, max=246, avg= 2.15, stdev= 2.24 00:27:58.235 clat (usec): min=3031, max=9646, avg=5925.95, stdev=1192.54 00:27:58.235 lat (usec): min=3065, max=9648, avg=5928.11, stdev=1192.54 00:27:58.235 clat percentiles (usec): 00:27:58.235 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:27:58.235 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5604], 00:27:58.235 | 70.00th=[ 5800], 80.00th=[ 7308], 90.00th=[ 8094], 95.00th=[ 8455], 00:27:58.235 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9372], 99.95th=[ 9503], 00:27:58.235 | 99.99th=[ 9634] 00:27:58.235 bw ( KiB/s): min=33808, max=54144, per=99.95%, avg=47524.00, stdev=9463.08, samples=4 00:27:58.235 iops : min= 8452, max=13536, avg=11881.00, stdev=2365.77, samples=4 00:27:58.235 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:27:58.235 slat (usec): min=2, max=223, avg= 2.24, stdev= 1.59 00:27:58.235 clat (usec): min=2375, max=8418, avg=4784.52, stdev=959.48 00:27:58.235 lat (usec): min=2390, max=8420, avg=4786.76, stdev=959.53 00:27:58.235 clat percentiles (usec): 00:27:58.235 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4113], 00:27:58.235 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:27:58.235 | 70.00th=[ 
4686], 80.00th=[ 5932], 90.00th=[ 6521], 95.00th=[ 6783], 00:27:58.235 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 7701], 99.95th=[ 7832], 00:27:58.235 | 99.99th=[ 8094] 00:27:58.235 bw ( KiB/s): min=34824, max=53552, per=100.00%, avg=47330.00, stdev=8774.75, samples=4 00:27:58.235 iops : min= 8706, max=13388, avg=11832.50, stdev=2193.69, samples=4 00:27:58.235 lat (msec) : 4=5.81%, 10=94.19% 00:27:58.235 cpu : usr=70.51%, sys=27.89%, ctx=66, majf=0, minf=16 00:27:58.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:58.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:58.235 issued rwts: total=23833,23721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.235 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:58.235 00:27:58.235 Run status group 0 (all jobs): 00:27:58.235 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:27:58.235 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:58.235 
13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:58.235 13:12:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:58.494 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:58.494 fio-3.35 00:27:58.494 Starting 1 thread 00:28:01.042 [2024-11-29 13:12:03.243915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70e580 is same with the state(6) to be set 00:28:01.042 00:28:01.042 test: (groupid=0, jobs=1): err= 0: pid=1032923: Fri Nov 29 13:12:03 2024 00:28:01.042 read: IOPS=9414, BW=147MiB/s (154MB/s)(295MiB/2004msec) 00:28:01.042 slat (usec): min=3, max=111, avg= 3.60, stdev= 1.61 00:28:01.042 clat (usec): min=1186, max=52066, avg=8295.14, stdev=3410.09 00:28:01.042 lat (usec): min=1189, max=52069, avg=8298.74, stdev=3410.16 00:28:01.042 clat percentiles (usec): 00:28:01.042 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:28:01.042 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8586], 00:28:01.042 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:28:01.042 | 99.00th=[13173], 99.50th=[15008], 99.90th=[50070], 99.95th=[51119], 00:28:01.042 | 99.99th=[51643] 00:28:01.042 bw ( KiB/s): min=65344, max=79872, per=49.81%, avg=75024.00, stdev=6770.83, samples=4 00:28:01.042 iops : min= 4084, max= 4992, avg=4689.00, stdev=423.18, samples=4 00:28:01.042 write: IOPS=5340, BW=83.4MiB/s (87.5MB/s)(153MiB/1837msec); 0 zone resets 00:28:01.042 slat (usec): min=39, max=453, avg=40.91, stdev= 8.01 00:28:01.042 clat (usec): min=2263, max=52685, avg=9258.93, stdev=2833.17 00:28:01.042 lat (usec): min=2303, 
max=52725, avg=9299.84, stdev=2833.95 00:28:01.042 clat percentiles (usec): 00:28:01.042 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7898], 00:28:01.042 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:28:01.042 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:28:01.042 | 99.00th=[13042], 99.50th=[14484], 99.90th=[51643], 99.95th=[52691], 00:28:01.042 | 99.99th=[52691] 00:28:01.042 bw ( KiB/s): min=68544, max=82080, per=91.19%, avg=77928.00, stdev=6401.83, samples=4 00:28:01.042 iops : min= 4284, max= 5130, avg=4870.50, stdev=400.11, samples=4 00:28:01.042 lat (msec) : 2=0.02%, 4=0.46%, 10=77.72%, 20=21.35%, 50=0.30% 00:28:01.042 lat (msec) : 100=0.15% 00:28:01.042 cpu : usr=85.12%, sys=13.63%, ctx=11, majf=0, minf=26 00:28:01.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:01.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:01.042 issued rwts: total=18867,9811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:01.042 00:28:01.042 Run status group 0 (all jobs): 00:28:01.042 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2004-2004msec 00:28:01.042 WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=153MiB (161MB), run=1837-1837msec 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 
00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.042 rmmod nvme_tcp 00:28:01.042 rmmod nvme_fabrics 00:28:01.042 rmmod nvme_keyring 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1031502 ']' 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1031502 00:28:01.042 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1031502 ']' 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1031502 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031502 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1031502' 00:28:01.043 killing process with pid 1031502 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1031502 00:28:01.043 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1031502 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.304 13:12:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.214 00:28:03.214 real 0m17.935s 00:28:03.214 user 1m1.554s 00:28:03.214 sys 0m7.980s 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.214 ************************************ 00:28:03.214 END TEST nvmf_fio_host 00:28:03.214 ************************************ 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.214 13:12:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.475 ************************************ 00:28:03.475 START TEST nvmf_failover 00:28:03.475 ************************************ 00:28:03.475 13:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:03.475 * Looking for test storage... 
00:28:03.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.475 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.475 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.475 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.476 --rc genhtml_branch_coverage=1 00:28:03.476 --rc genhtml_function_coverage=1 00:28:03.476 --rc genhtml_legend=1 00:28:03.476 --rc geninfo_all_blocks=1 00:28:03.476 --rc geninfo_unexecuted_blocks=1 00:28:03.476 00:28:03.476 ' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:03.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.476 --rc genhtml_branch_coverage=1 00:28:03.476 --rc genhtml_function_coverage=1 00:28:03.476 --rc genhtml_legend=1 00:28:03.476 --rc geninfo_all_blocks=1 00:28:03.476 --rc geninfo_unexecuted_blocks=1 00:28:03.476 00:28:03.476 ' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.476 --rc genhtml_branch_coverage=1 00:28:03.476 --rc genhtml_function_coverage=1 00:28:03.476 --rc genhtml_legend=1 00:28:03.476 --rc geninfo_all_blocks=1 00:28:03.476 --rc geninfo_unexecuted_blocks=1 00:28:03.476 00:28:03.476 ' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.476 --rc genhtml_branch_coverage=1 00:28:03.476 --rc genhtml_function_coverage=1 00:28:03.476 --rc genhtml_legend=1 00:28:03.476 --rc geninfo_all_blocks=1 00:28:03.476 --rc geninfo_unexecuted_blocks=1 00:28:03.476 00:28:03.476 ' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.476 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.477 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.738 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.739 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.739 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.878 13:12:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:11.878 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.878 13:12:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:11.878 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.878 13:12:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:11.878 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:11.878 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.878 13:12:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.878 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:28:11.879 00:28:11.879 --- 10.0.0.2 ping statistics --- 00:28:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.879 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:28:11.879 00:28:11.879 --- 10.0.0.1 ping statistics --- 00:28:11.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.879 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1038110 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1038110 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1038110 ']' 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.879 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:11.879 [2024-11-29 13:12:13.791046] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:28:11.879 [2024-11-29 13:12:13.791110] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.879 [2024-11-29 13:12:13.892073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.879 [2024-11-29 13:12:13.944546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.879 [2024-11-29 13:12:13.944595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.879 [2024-11-29 13:12:13.944603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.879 [2024-11-29 13:12:13.944611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:11.879 [2024-11-29 13:12:13.944617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.879 [2024-11-29 13:12:13.946492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.879 [2024-11-29 13:12:13.946751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.879 [2024-11-29 13:12:13.946753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.140 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.141 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:12.141 [2024-11-29 13:12:14.808567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.402 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:12.402 Malloc0 00:28:12.402 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.662 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.924 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.185 [2024-11-29 13:12:15.620610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.185 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:13.185 [2024-11-29 13:12:15.805196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.185 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:13.446 [2024-11-29 13:12:16.001927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1038681 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1038681 /var/tmp/bdevperf.sock 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1038681 ']' 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.446 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:14.385 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.385 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:14.385 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:14.643 NVMe0n1 00:28:14.643 13:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:14.902 00:28:14.902 13:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1038836 00:28:14.902 13:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:14.902 13:12:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
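The trace above drives bdevperf over its RPC socket, attaching the same controller through successive listener ports with `-x failover`. A minimal sketch of how such an attach command line could be assembled (socket path, NQN, and addresses copied from the trace; `attach_cmd` is a hypothetical helper that only prints the command rather than invoking rpc.py):

```shell
#!/bin/sh
# Hypothetical helper: print the rpc.py invocation for attaching the
# bdevperf NVMe controller on a given listener port (does not execute it).
attach_cmd() {
    port=$1
    echo "scripts/rpc.py -s /var/tmp/bdevperf.sock" \
         "bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2" \
         "-s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover"
}

# Same controller name, successive ports, as in the failover test.
attach_cmd 4420
attach_cmd 4421
```

Attaching the same `-b NVMe0` name on a second port is what registers the extra path that the failover test later removes.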
00:28:15.839 13:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.099 [2024-11-29 13:12:18.550046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1102ed0 is same with the state(6) to be set
00:28:16.100 13:12:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:28:19.397 13:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:19.397 00:28:19.397 13:12:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:19.658 [2024-11-29 13:12:22.174898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1103cf0 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1103cf0 is same with the state(6) to be set 00:28:19.659 [2024-11-29 13:12:22.175297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1103cf0 is same with the state(6) to be set 00:28:19.659 [2024-11-29 13:12:22.175301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1103cf0 is same with the state(6) to be set 00:28:19.659 [2024-11-29 13:12:22.175306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1103cf0 is same with the state(6) to be set 00:28:19.659 13:12:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:28:22.955 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.955 [2024-11-29 13:12:25.367451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.955 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:28:23.896 13:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:24.158 13:12:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1038836 00:28:30.746 { 00:28:30.746 "results": [ 00:28:30.746 { 00:28:30.746 "job": "NVMe0n1", 00:28:30.746 "core_mask": "0x1", 00:28:30.746 "workload": "verify", 00:28:30.746 "status": "finished", 00:28:30.746 "verify_range": { 00:28:30.746 "start": 0, 00:28:30.746 "length": 16384 00:28:30.746 }, 00:28:30.746 "queue_depth": 128, 00:28:30.746 "io_size": 4096, 00:28:30.746 "runtime": 15.008259, 00:28:30.746 "iops": 12242.725821829168, 00:28:30.746 "mibps": 47.82314774152019, 00:28:30.746 "io_failed": 6325, 
00:28:30.746       "io_timeout": 0,
00:28:30.746       "avg_latency_us": 10085.587904054886,
00:28:30.746       "min_latency_us": 549.5466666666666,
00:28:30.746       "max_latency_us": 17585.493333333332
00:28:30.746     }
00:28:30.746   ],
00:28:30.746   "core_count": 1
00:28:30.746 }
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1038681 ']'
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038681'
00:28:30.746 killing process with pid 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1038681
00:28:30.746 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.746 [2024-11-29 13:12:16.075221] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization...
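The bdevperf "results" block above is internally consistent; as a minimal sketch (the dict below is hand-transcribed from the log, and the derivation mirrors how bdevperf reports MiB/s from IOPS and I/O size), one can cross-check the reported figures:

```python
# Sanity-check the bdevperf result entry quoted in the log above.
# Values are transcribed from the log; field names follow bdevperf's JSON output.
result = {
    "io_size": 4096,
    "runtime": 15.008259,
    "iops": 12242.725821829168,
    "mibps": 47.82314774152019,
    "io_failed": 6325,
}

# MiB/s should equal IOPS * io_size / 2^20
derived_mibps = result["iops"] * result["io_size"] / (1 << 20)
assert abs(derived_mibps - result["mibps"]) < 1e-3

# Approximate total completed I/Os over the ~15 s verify run
total_ios = result["iops"] * result["runtime"]
print(round(derived_mibps, 5), round(total_ios))
```

The check confirms the 4 KiB `io_size` ties the IOPS and MiB/s columns together; `io_failed` counts the I/Os aborted during the failover window rather than a throughput term.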
00:28:30.746 [2024-11-29 13:12:16.075302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038681 ]
00:28:30.746 [2024-11-29 13:12:16.168417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.747 [2024-11-29 13:12:16.221840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.747 Running I/O for 15 seconds...
00:28:30.747 11241.00 IOPS, 43.91 MiB/s [2024-11-29T12:12:33.427Z]
00:28:30.747 [2024-11-29 13:12:18.551099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.747 [2024-11-29 13:12:18.551130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.747 (analogous nvme_qpair print_command/print_completion pairs repeated, 13:12:18.551147 through 13:12:18.552529, for READ lba 96296-96552 and WRITE lba 96608-96992, every command len:8 and every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:28:30.749 [2024-11-29 13:12:18.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.749 [2024-11-29 13:12:18.552629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:30.749 [2024-11-29 13:12:18.552829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.552985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.552992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.553001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.553009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.749 [2024-11-29 13:12:18.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.749 [2024-11-29 13:12:18.553027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 
[2024-11-29 13:12:18.553108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.750 [2024-11-29 13:12:18.553156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.750 [2024-11-29 13:12:18.553352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.750 [2024-11-29 13:12:18.553358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:28:30.750 [2024-11-29 13:12:18.553365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553403] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:30.750 [2024-11-29 13:12:18.553424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.750 [2024-11-29 13:12:18.553432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.750 [2024-11-29 13:12:18.553448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.750 [2024-11-29 13:12:18.553464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.750 [2024-11-29 13:12:18.553479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:18.553486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:30.750 [2024-11-29 13:12:18.557076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:30.750 [2024-11-29 13:12:18.557101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b9da0 (9): Bad file descriptor 00:28:30.750 [2024-11-29 13:12:18.587266] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:28:30.750 10917.50 IOPS, 42.65 MiB/s [2024-11-29T12:12:33.430Z] 10950.33 IOPS, 42.77 MiB/s [2024-11-29T12:12:33.430Z] 11096.75 IOPS, 43.35 MiB/s [2024-11-29T12:12:33.430Z] [2024-11-29 13:12:22.175890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.175989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.175994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.750 [2024-11-29 13:12:22.176067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.750 [2024-11-29 13:12:22.176074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-11-29 13:12:22.176079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.751 [2024-11-29 13:12:22.176085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-11-29 13:12:22.176090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.751 [2024-11-29 13:12:22.176096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 [2024-11-29 13:12:22.176101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.751 [2024-11-29 13:12:22.176108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.751 
00:28:30.751 [2024-11-29 13:12:22.176113 - 13:12:22.177378] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated for every outstanding I/O on qid:1 — READ/WRITE sqid:1 nsid:1 lba:41072-41936 len:8 — each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:30.754 [2024-11-29 13:12:22.177395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:30.754 [2024-11-29 13:12:22.177401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41944 len:8 PRP1 0x0 PRP2 0x0
00:28:30.754 [2024-11-29 13:12:22.177406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:30.754 [2024-11-29 13:12:22.177417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:30.754 [2024-11-29 13:12:22.177423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41952 len:8 PRP1 0x0 PRP2 0x0
00:28:30.754 [2024-11-29 13:12:22.177428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177460] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:30.754 [2024-11-29 13:12:22.177476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.754 [2024-11-29 13:12:22.177482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.754 [2024-11-29 13:12:22.177493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.754 [2024-11-29 13:12:22.177503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:30.754 [2024-11-29 13:12:22.177514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.754 [2024-11-29 13:12:22.177519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:28:30.754 [2024-11-29 13:12:22.177537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b9da0 (9): Bad file descriptor
00:28:30.754 [2024-11-29 13:12:22.179984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:30.754 [2024-11-29 13:12:22.246729] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:28:30.754 11236.00 IOPS, 43.89 MiB/s [2024-11-29T12:12:33.434Z] 11500.17 IOPS, 44.92 MiB/s [2024-11-29T12:12:33.434Z] 11698.29 IOPS, 45.70 MiB/s [2024-11-29T12:12:33.434Z] 11828.38 IOPS, 46.20 MiB/s [2024-11-29T12:12:33.434Z] 11931.11 IOPS, 46.61 MiB/s [2024-11-29T12:12:33.434Z] [2024-11-29 13:12:26.563379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-11-29 13:12:26.563412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.754 [2024-11-29 13:12:26.563436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 
13:12:26.563543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:30.754 [2024-11-29 13:12:26.563678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.754 [2024-11-29 13:12:26.563757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.754 [2024-11-29 13:12:26.563763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 
[2024-11-29 13:12:26.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.563992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.563998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 
[2024-11-29 13:12:26.564078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.755 [2024-11-29 13:12:26.564082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.755 [2024-11-29 13:12:26.564107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120112 len:8 PRP1 0x0 PRP2 0x0 00:28:30.755 [2024-11-29 13:12:26.564112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.755 [2024-11-29 13:12:26.564124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.755 [2024-11-29 13:12:26.564128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120120 len:8 PRP1 0x0 PRP2 0x0 00:28:30.755 [2024-11-29 13:12:26.564133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.755 [2024-11-29 13:12:26.564143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.755 [2024-11-29 13:12:26.564147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120128 len:8 PRP1 0x0 PRP2 0x0 00:28:30.755 [2024-11-29 13:12:26.564152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 
13:12:26.564157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.755 [2024-11-29 13:12:26.564165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.755 [2024-11-29 13:12:26.564169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120136 len:8 PRP1 0x0 PRP2 0x0 00:28:30.755 [2024-11-29 13:12:26.564174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.755 [2024-11-29 13:12:26.564179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120144 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120152 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 
[2024-11-29 13:12:26.564226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120160 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120168 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120176 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120184 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120192 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120200 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120208 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564352] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120216 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120224 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120232 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120240 len:8 PRP1 0x0 PRP2 0x0 
00:28:30.756 [2024-11-29 13:12:26.564415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120248 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120256 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120264 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564474] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120272 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120280 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120288 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564540] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120296 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120304 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120312 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.756 [2024-11-29 13:12:26.564591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.756 [2024-11-29 13:12:26.564595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120320 len:8 PRP1 0x0 PRP2 0x0 00:28:30.756 [2024-11-29 13:12:26.564600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.756 [2024-11-29 13:12:26.564605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120328 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119616 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119624 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564666] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119632 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119640 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119648 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119656 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 
[2024-11-29 13:12:26.564729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119664 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120336 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120344 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120352 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120360 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120368 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120376 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120384 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120392 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120400 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.564923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.564927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120408 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.564932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.564938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.576186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120416 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.576195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.576205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.576214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120424 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.576219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.576225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.576233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120432 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.576238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.576243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.576251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.576255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.576261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.757 [2024-11-29 13:12:26.576269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120448 len:8 PRP1 0x0 PRP2 0x0 00:28:30.757 [2024-11-29 13:12:26.576275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.757 [2024-11-29 13:12:26.576280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.757 [2024-11-29 13:12:26.576284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120456 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120464 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120472 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120480 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:28:30.758 [2024-11-29 13:12:26.576363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120488 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120496 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120504 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:120512 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120520 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120528 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120536 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 
13:12:26.576490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120544 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120552 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120560 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 
[2024-11-29 13:12:26.576553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120568 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120576 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120584 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120592 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120600 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120608 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.758 [2024-11-29 13:12:26.576659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.758 [2024-11-29 13:12:26.576664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120616 len:8 PRP1 0x0 PRP2 0x0 00:28:30.758 [2024-11-29 13:12:26.576669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576705] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 
10.0.0.2:4420 00:28:30.758 [2024-11-29 13:12:26.576729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.758 [2024-11-29 13:12:26.576735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.758 [2024-11-29 13:12:26.576747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.758 [2024-11-29 13:12:26.576758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.758 [2024-11-29 13:12:26.576769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.758 [2024-11-29 13:12:26.576774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:28:30.759 [2024-11-29 13:12:26.576807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b9da0 (9): Bad file descriptor 00:28:30.759 [2024-11-29 13:12:26.579329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:30.759 [2024-11-29 13:12:26.601972] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:28:30.759 11989.20 IOPS, 46.83 MiB/s [2024-11-29T12:12:33.439Z] 12072.91 IOPS, 47.16 MiB/s [2024-11-29T12:12:33.439Z] 12113.42 IOPS, 47.32 MiB/s [2024-11-29T12:12:33.439Z] 12169.69 IOPS, 47.54 MiB/s [2024-11-29T12:12:33.439Z] 12218.07 IOPS, 47.73 MiB/s 00:28:30.759 Latency(us) 00:28:30.759 [2024-11-29T12:12:33.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.759 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.759 Verification LBA range: start 0x0 length 0x4000 00:28:30.759 NVMe0n1 : 15.01 12242.73 47.82 421.43 0.00 10085.59 549.55 17585.49 00:28:30.759 [2024-11-29T12:12:33.439Z] =================================================================================================================== 00:28:30.759 [2024-11-29T12:12:33.439Z] Total : 12242.73 47.82 421.43 0.00 10085.59 549.55 17585.49 00:28:30.759 Received shutdown signal, test time was about 15.000000 seconds 00:28:30.759 00:28:30.759 Latency(us) 00:28:30.759 [2024-11-29T12:12:33.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.759 [2024-11-29T12:12:33.439Z] =================================================================================================================== 00:28:30.759 [2024-11-29T12:12:33.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1041798 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1041798 /var/tmp/bdevperf.sock 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1041798 ']' 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.759 13:12:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.019 13:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.019 13:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:31.019 13:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:31.280 [2024-11-29 13:12:33.734094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:31.280 13:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:31.280 [2024-11-29 13:12:33.918502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:31.280 13:12:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:31.850 NVMe0n1 00:28:31.850 13:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:32.110 00:28:32.110 13:12:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:32.369 00:28:32.627 13:12:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:32.627 13:12:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:32.627 13:12:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:32.885 13:12:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:36.175 13:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:36.175 13:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:36.175 13:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1043059 00:28:36.175 13:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1043059 00:28:36.175 13:12:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:37.112 { 00:28:37.112 "results": [ 00:28:37.112 { 00:28:37.112 "job": "NVMe0n1", 00:28:37.112 "core_mask": "0x1", 00:28:37.112 "workload": "verify", 00:28:37.112 "status": "finished", 00:28:37.112 "verify_range": { 00:28:37.112 "start": 0, 00:28:37.112 "length": 16384 00:28:37.112 }, 00:28:37.112 "queue_depth": 128, 00:28:37.112 "io_size": 4096, 00:28:37.112 "runtime": 1.005097, 00:28:37.112 "iops": 12675.393519232472, 00:28:37.112 "mibps": 49.51325593450184, 00:28:37.112 "io_failed": 0, 00:28:37.112 "io_timeout": 0, 00:28:37.112 "avg_latency_us": 
10052.875118785974, 00:28:37.112 "min_latency_us": 1181.0133333333333, 00:28:37.112 "max_latency_us": 9448.106666666667 00:28:37.112 } 00:28:37.112 ], 00:28:37.112 "core_count": 1 00:28:37.112 } 00:28:37.112 13:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:37.112 [2024-11-29 13:12:32.787246] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:28:37.112 [2024-11-29 13:12:32.787327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041798 ] 00:28:37.112 [2024-11-29 13:12:32.871687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.112 [2024-11-29 13:12:32.900346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.112 [2024-11-29 13:12:35.387618] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:37.112 [2024-11-29 13:12:35.387653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.112 [2024-11-29 13:12:35.387662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.112 [2024-11-29 13:12:35.387669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.112 [2024-11-29 13:12:35.387674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.112 [2024-11-29 13:12:35.387680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:37.112 [2024-11-29 13:12:35.387685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.112 [2024-11-29 13:12:35.387691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.112 [2024-11-29 13:12:35.387696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.112 [2024-11-29 13:12:35.387701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:28:37.112 [2024-11-29 13:12:35.387722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:37.112 [2024-11-29 13:12:35.387733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a8da0 (9): Bad file descriptor 00:28:37.112 [2024-11-29 13:12:35.439222] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:37.112 Running I/O for 1 seconds... 
00:28:37.113 12591.00 IOPS, 49.18 MiB/s 00:28:37.113 Latency(us) 00:28:37.113 [2024-11-29T12:12:39.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.113 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.113 Verification LBA range: start 0x0 length 0x4000 00:28:37.113 NVMe0n1 : 1.01 12675.39 49.51 0.00 0.00 10052.88 1181.01 9448.11 00:28:37.113 [2024-11-29T12:12:39.793Z] =================================================================================================================== 00:28:37.113 [2024-11-29T12:12:39.793Z] Total : 12675.39 49.51 0.00 0.00 10052.88 1181.01 9448.11 00:28:37.113 13:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:37.113 13:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:37.372 13:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:37.630 13:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:37.630 13:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:37.630 13:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:37.889 13:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1041798 ']' 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1041798' 00:28:41.185 killing process with pid 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1041798 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:41.185 13:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.446 rmmod nvme_tcp 00:28:41.446 rmmod nvme_fabrics 00:28:41.446 rmmod nvme_keyring 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1038110 ']' 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1038110 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1038110 ']' 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1038110 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.446 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038110 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038110' 00:28:41.706 killing process with pid 1038110 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1038110 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1038110 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.706 13:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.699 13:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.699 00:28:43.699 real 0m40.457s 00:28:43.699 user 2m4.093s 00:28:43.699 sys 
0m8.918s 00:28:43.699 13:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.699 13:12:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:43.699 ************************************ 00:28:43.699 END TEST nvmf_failover 00:28:43.699 ************************************ 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.959 ************************************ 00:28:43.959 START TEST nvmf_host_discovery 00:28:43.959 ************************************ 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:43.959 * Looking for test storage... 
00:28:43.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.959 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.221 --rc genhtml_branch_coverage=1 00:28:44.221 --rc genhtml_function_coverage=1 00:28:44.221 --rc 
genhtml_legend=1 00:28:44.221 --rc geninfo_all_blocks=1 00:28:44.221 --rc geninfo_unexecuted_blocks=1 00:28:44.221 00:28:44.221 ' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.221 --rc genhtml_branch_coverage=1 00:28:44.221 --rc genhtml_function_coverage=1 00:28:44.221 --rc genhtml_legend=1 00:28:44.221 --rc geninfo_all_blocks=1 00:28:44.221 --rc geninfo_unexecuted_blocks=1 00:28:44.221 00:28:44.221 ' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.221 --rc genhtml_branch_coverage=1 00:28:44.221 --rc genhtml_function_coverage=1 00:28:44.221 --rc genhtml_legend=1 00:28:44.221 --rc geninfo_all_blocks=1 00:28:44.221 --rc geninfo_unexecuted_blocks=1 00:28:44.221 00:28:44.221 ' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.221 --rc genhtml_branch_coverage=1 00:28:44.221 --rc genhtml_function_coverage=1 00:28:44.221 --rc genhtml_legend=1 00:28:44.221 --rc geninfo_all_blocks=1 00:28:44.221 --rc geninfo_unexecuted_blocks=1 00:28:44.221 00:28:44.221 ' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.221 13:12:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.221 13:12:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.221 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.222 13:12:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.222 13:12:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.369 
13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.369 13:12:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:52.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.369 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:52.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:52.370 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:52.370 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.370 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:28:52.370 00:28:52.370 --- 10.0.0.2 ping statistics --- 00:28:52.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.370 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:52.370 00:28:52.370 --- 10.0.0.1 ping statistics --- 00:28:52.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.370 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.370 
13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1048181 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1048181 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1048181 ']' 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.370 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.371 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:52.371 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.371 13:12:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.371 [2024-11-29 13:12:54.234561] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:28:52.371 [2024-11-29 13:12:54.234629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.371 [2024-11-29 13:12:54.334511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.371 [2024-11-29 13:12:54.385009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.371 [2024-11-29 13:12:54.385061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.371 [2024-11-29 13:12:54.385070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.371 [2024-11-29 13:12:54.385077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.371 [2024-11-29 13:12:54.385083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.371 [2024-11-29 13:12:54.385847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.371 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.371 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:52.371 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.371 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.371 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 [2024-11-29 13:12:55.094632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 [2024-11-29 13:12:55.106906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:52.631 13:12:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 null0 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 null1 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1048507 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1048507 /tmp/host.sock 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1048507 ']' 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:52.631 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.631 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.631 [2024-11-29 13:12:55.204993] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:28:52.631 [2024-11-29 13:12:55.205058] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048507 ] 00:28:52.631 [2024-11-29 13:12:55.297730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.891 [2024-11-29 13:12:55.350469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:53.462 
13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:53.462 13:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.462 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.724 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:53.725 13:12:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 [2024-11-29 13:12:56.362172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.725 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:53.987 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:53.988 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.988 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:53.988 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:54.560 [2024-11-29 13:12:57.085349] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:54.561 [2024-11-29 13:12:57.085382] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:54.561 [2024-11-29 13:12:57.085396] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:54.561 [2024-11-29 13:12:57.172669] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:54.822 [2024-11-29 13:12:57.395082] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:54.822 [2024-11-29 13:12:57.396012] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xb437f0:1 started. 00:28:54.822 [2024-11-29 13:12:57.397629] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:54.822 [2024-11-29 13:12:57.397647] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:54.822 [2024-11-29 13:12:57.402421] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb437f0 was disconnected and freed. delete nvme_qpair. 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.083 13:12:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.083 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.084 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:55.345 
[2024-11-29 13:12:57.809666] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb439d0:1 started. 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:55.345 [2024-11-29 13:12:57.812981] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb439d0 was disconnected and freed. delete nvme_qpair. 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:55.345 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:28:55.346 [2024-11-29 13:12:57.914358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:55.346 [2024-11-29 13:12:57.914896] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:55.346 [2024-11-29 13:12:57.914918] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.346 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:55.346 [2024-11-29 13:12:58.004187] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new 
path for nvme0 00:28:55.346 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:55.607 13:12:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:55.607 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:55.867 [2024-11-29 13:12:58.313821] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:55.867 [2024-11-29 13:12:58.313861] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:55.867 [2024-11-29 13:12:58.313870] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:55.867 [2024-11-29 13:12:58.313875] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:56.438 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.700 [2024-11-29 13:12:59.190088] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:56.700 [2024-11-29 13:12:59.190110] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:56.700 [2024-11-29 13:12:59.199512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.700 [2024-11-29 13:12:59.199531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.700 [2024-11-29 13:12:59.199542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.700 [2024-11-29 13:12:59.199550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.700 [2024-11-29 13:12:59.199558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.700 [2024-11-29 13:12:59.199570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.700 [2024-11-29 13:12:59.199578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.700 [2024-11-29 13:12:59.199586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.700 [2024-11-29 13:12:59.199593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.700 13:12:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.700 [2024-11-29 13:12:59.209527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.700 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.700 [2024-11-29 13:12:59.219563] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.700 [2024-11-29 13:12:59.219575] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.700 [2024-11-29 13:12:59.219582] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.700 [2024-11-29 13:12:59.219588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.700 [2024-11-29 13:12:59.219606] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:56.700 [2024-11-29 13:12:59.219926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.700 [2024-11-29 13:12:59.219940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.700 [2024-11-29 13:12:59.219948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.700 [2024-11-29 13:12:59.219960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.219971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.219978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.219986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.701 [2024-11-29 13:12:59.219992] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.701 [2024-11-29 13:12:59.219998] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.701 [2024-11-29 13:12:59.220003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:56.701 [2024-11-29 13:12:59.229637] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.701 [2024-11-29 13:12:59.229648] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:56.701 [2024-11-29 13:12:59.229653] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.229657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.701 [2024-11-29 13:12:59.229675] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.229849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.701 [2024-11-29 13:12:59.229863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.701 [2024-11-29 13:12:59.229871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.701 [2024-11-29 13:12:59.229882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.229893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.229900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.229907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.701 [2024-11-29 13:12:59.229913] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.701 [2024-11-29 13:12:59.229918] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.701 [2024-11-29 13:12:59.229922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.701 [2024-11-29 13:12:59.239706] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.701 [2024-11-29 13:12:59.239720] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.701 [2024-11-29 13:12:59.239725] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.239730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.701 [2024-11-29 13:12:59.239745] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.240027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.701 [2024-11-29 13:12:59.240039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.701 [2024-11-29 13:12:59.240047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.701 [2024-11-29 13:12:59.240058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.240068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.240075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.240082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.701 [2024-11-29 13:12:59.240088] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:56.701 [2024-11-29 13:12:59.240093] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.701 [2024-11-29 13:12:59.240097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:56.701 [2024-11-29 13:12:59.249776] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.701 [2024-11-29 13:12:59.249788] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.701 [2024-11-29 13:12:59.249792] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:28:56.701 [2024-11-29 13:12:59.249797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.701 [2024-11-29 13:12:59.249811] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.250093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.701 [2024-11-29 13:12:59.250105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.701 [2024-11-29 13:12:59.250112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.701 [2024-11-29 13:12:59.250123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.250134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.250140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.250147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.701 [2024-11-29 13:12:59.250154] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.701 [2024-11-29 13:12:59.250163] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.701 [2024-11-29 13:12:59.250168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.701 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:56.701 [2024-11-29 13:12:59.259843] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.701 [2024-11-29 13:12:59.259856] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.701 [2024-11-29 13:12:59.259861] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.259866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.701 [2024-11-29 13:12:59.259881] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:56.701 [2024-11-29 13:12:59.260165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.701 [2024-11-29 13:12:59.260181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.701 [2024-11-29 13:12:59.260189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.701 [2024-11-29 13:12:59.260200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.260211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.260217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.260224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.701 [2024-11-29 13:12:59.260230] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.701 [2024-11-29 13:12:59.260235] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.701 [2024-11-29 13:12:59.260240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:56.701 [2024-11-29 13:12:59.269912] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.701 [2024-11-29 13:12:59.269923] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:56.701 [2024-11-29 13:12:59.269927] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.269932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.701 [2024-11-29 13:12:59.269945] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:56.701 [2024-11-29 13:12:59.270363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.701 [2024-11-29 13:12:59.270401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.701 [2024-11-29 13:12:59.270412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.701 [2024-11-29 13:12:59.270431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.701 [2024-11-29 13:12:59.270458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.701 [2024-11-29 13:12:59.270466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.701 [2024-11-29 13:12:59.270474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.702 [2024-11-29 13:12:59.270482] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.702 [2024-11-29 13:12:59.270487] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.702 [2024-11-29 13:12:59.270492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.702 [2024-11-29 13:12:59.279980] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.702 [2024-11-29 13:12:59.279996] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.702 [2024-11-29 13:12:59.280001] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.702 [2024-11-29 13:12:59.280006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.702 [2024-11-29 13:12:59.280023] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:56.702 [2024-11-29 13:12:59.280461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.702 [2024-11-29 13:12:59.280499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.702 [2024-11-29 13:12:59.280510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.702 [2024-11-29 13:12:59.280529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.702 [2024-11-29 13:12:59.280553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.702 [2024-11-29 13:12:59.280561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.702 [2024-11-29 13:12:59.280569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.702 [2024-11-29 13:12:59.280577] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:56.702 [2024-11-29 13:12:59.280582] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.702 [2024-11-29 13:12:59.280587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:56.702 [2024-11-29 13:12:59.290057] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.702 [2024-11-29 13:12:59.290071] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.702 [2024-11-29 13:12:59.290076] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.702 [2024-11-29 13:12:59.290080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.702 [2024-11-29 13:12:59.290096] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:56.702 [2024-11-29 13:12:59.290415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.702 [2024-11-29 13:12:59.290429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.702 [2024-11-29 13:12:59.290436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.702 [2024-11-29 13:12:59.290447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.702 [2024-11-29 13:12:59.290458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.702 [2024-11-29 13:12:59.290464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.702 [2024-11-29 13:12:59.290472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.702 [2024-11-29 13:12:59.290478] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.702 [2024-11-29 13:12:59.290483] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.702 [2024-11-29 13:12:59.290487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:56.702 [2024-11-29 13:12:59.300126] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.702 [2024-11-29 13:12:59.300139] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.702 [2024-11-29 13:12:59.300144] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.702 [2024-11-29 13:12:59.300149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.702 [2024-11-29 13:12:59.300166] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:56.702 [2024-11-29 13:12:59.300509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.702 [2024-11-29 13:12:59.300521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.702 [2024-11-29 13:12:59.300528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.702 [2024-11-29 13:12:59.300539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.702 [2024-11-29 13:12:59.300549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.702 [2024-11-29 13:12:59.300556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.702 [2024-11-29 13:12:59.300562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.702 [2024-11-29 13:12:59.300569] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.702 [2024-11-29 13:12:59.300573] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.702 [2024-11-29 13:12:59.300578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:56.702 [2024-11-29 13:12:59.310198] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:56.702 [2024-11-29 13:12:59.310212] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:56.702 [2024-11-29 13:12:59.310216] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:56.702 [2024-11-29 13:12:59.310221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.702 [2024-11-29 13:12:59.310236] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:56.702 [2024-11-29 13:12:59.310515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.702 [2024-11-29 13:12:59.310532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb13e10 with addr=10.0.0.2, port=4420 00:28:56.702 [2024-11-29 13:12:59.310539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13e10 is same with the state(6) to be set 00:28:56.702 [2024-11-29 13:12:59.310551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb13e10 (9): Bad file descriptor 00:28:56.702 [2024-11-29 13:12:59.310561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.702 [2024-11-29 13:12:59.310567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.702 [2024-11-29 13:12:59.310574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:56.702 [2024-11-29 13:12:59.310581] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:56.702 [2024-11-29 13:12:59.310586] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:56.702 [2024-11-29 13:12:59.310590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.702 [2024-11-29 13:12:59.317676] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:56.702 [2024-11-29 13:12:59.317694] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:28:56.702 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:58.086 13:13:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.086 
13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:58.086 13:13:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:58.086 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:58.087 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:58.087 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.087 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.042 [2024-11-29 13:13:01.686329] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:59.042 [2024-11-29 13:13:01.686343] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:59.042 [2024-11-29 13:13:01.686353] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:59.301 [2024-11-29 13:13:01.775611] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:59.301 [2024-11-29 13:13:01.879346] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:59.301 [2024-11-29 13:13:01.880035] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb494a0:1 started. 00:28:59.301 [2024-11-29 13:13:01.881408] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:59.301 [2024-11-29 13:13:01.881430] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.301 [2024-11-29 13:13:01.883323] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb494a0 was disconnected and freed. delete nvme_qpair. 
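The trace above repeatedly drives a `waitforcondition 'cond'` helper (local `max=10`, `(( max-- ))`, `eval` of the condition string) until the subsystem/bdev lists settle. A minimal sketch of that polling loop, assuming the names and retry count shown in the `autotest_common.sh@918`-`@922` lines (the real helper may differ in details such as its sleep interval):

```shell
# Hedged sketch of the retry helper exercised in the trace above.
# cond is a shell expression polled until true or until max tries elapse.
waitforcondition() {
	local cond=$1
	local max=${2:-10}       # the trace shows "local max=10"
	while ((max--)); do
		if eval "$cond"; then
			return 0         # condition met, mirrors "return 0" at @922
		fi
		sleep 1              # assumed pacing; the log does not show the delay
	done
	return 1                 # gave up after max attempts
}
```

Used as in the log, e.g. `waitforcondition '[[ "$(get_bdev_list)" == "" ]]'`, the caller only proceeds once the RPC-backed condition holds.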
00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.301 request: 00:28:59.301 { 00:28:59.301 "name": "nvme", 00:28:59.301 "trtype": "tcp", 00:28:59.301 "traddr": "10.0.0.2", 00:28:59.301 "adrfam": "ipv4", 00:28:59.301 "trsvcid": "8009", 00:28:59.301 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:59.301 "wait_for_attach": true, 00:28:59.301 "method": "bdev_nvme_start_discovery", 00:28:59.301 "req_id": 1 00:28:59.301 } 00:28:59.301 Got JSON-RPC error response 00:28:59.301 response: 00:28:59.301 { 00:28:59.301 "code": -17, 00:28:59.301 
"message": "File exists" 00:28:59.301 } 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.301 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:59.561 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 request: 00:28:59.561 { 00:28:59.561 "name": "nvme_second", 00:28:59.561 "trtype": "tcp", 00:28:59.561 "traddr": "10.0.0.2", 00:28:59.561 "adrfam": "ipv4", 00:28:59.561 "trsvcid": "8009", 00:28:59.561 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:59.561 "wait_for_attach": true, 00:28:59.561 "method": "bdev_nvme_start_discovery", 00:28:59.561 "req_id": 1 00:28:59.561 } 00:28:59.561 Got JSON-RPC error response 00:28:59.561 response: 00:28:59.561 { 00:28:59.561 "code": -17, 00:28:59.561 "message": "File exists" 00:28:59.561 } 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:59.561 
13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:00.499 [2024-11-29 13:13:03.145220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.499 [2024-11-29 13:13:03.145244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb44370 with addr=10.0.0.2, port=8010 00:29:00.499 [2024-11-29 13:13:03.145254] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:00.499 [2024-11-29 13:13:03.145259] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:00.499 [2024-11-29 13:13:03.145264] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:01.876 [2024-11-29 13:13:04.147596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.876 [2024-11-29 13:13:04.147614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb44370 with addr=10.0.0.2, port=8010 00:29:01.876 [2024-11-29 13:13:04.147622] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:01.876 [2024-11-29 13:13:04.147627] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:01.876 [2024-11-29 
13:13:04.147631] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:02.814 [2024-11-29 13:13:05.149598] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:02.814 request: 00:29:02.814 { 00:29:02.814 "name": "nvme_second", 00:29:02.814 "trtype": "tcp", 00:29:02.814 "traddr": "10.0.0.2", 00:29:02.814 "adrfam": "ipv4", 00:29:02.814 "trsvcid": "8010", 00:29:02.814 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:02.814 "wait_for_attach": false, 00:29:02.814 "attach_timeout_ms": 3000, 00:29:02.814 "method": "bdev_nvme_start_discovery", 00:29:02.814 "req_id": 1 00:29:02.814 } 00:29:02.814 Got JSON-RPC error response 00:29:02.814 response: 00:29:02.814 { 00:29:02.814 "code": -110, 00:29:02.814 "message": "Connection timed out" 00:29:02.814 } 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
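The expected-failure checks in this section (`NOT rpc_cmd … bdev_nvme_start_discovery`, which must return the -17 "File exists" and -110 "Connection timed out" JSON-RPC errors) rely on a status-inverting wrapper. A minimal sketch of that pattern, assuming the `es` bookkeeping visible in the `autotest_common.sh@652`-`@679` lines (the real helper also validates its argument via `valid_exec_arg` before running it):

```shell
# Hedged sketch of the NOT wrapper seen in the trace: run a command that
# is expected to fail, and succeed only when it actually did fail.
NOT() {
	local es=0
	"$@" || es=$?       # run the wrapped command, capture its exit status
	((es != 0))         # invert: nonzero status from the command means PASS
}
```

So `NOT rpc_cmd … bdev_nvme_start_discovery …` passes exactly when the duplicate start is rejected, which is what the test asserts.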
00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1048507 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.814 rmmod nvme_tcp 00:29:02.814 rmmod nvme_fabrics 00:29:02.814 rmmod nvme_keyring 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1048181 ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1048181 00:29:02.814 
13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1048181 ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1048181 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1048181 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1048181' 00:29:02.814 killing process with pid 1048181 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1048181 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1048181 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.814 13:13:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.814 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.356 00:29:05.356 real 0m21.079s 00:29:05.356 user 0m25.195s 00:29:05.356 sys 0m7.231s 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:05.356 ************************************ 00:29:05.356 END TEST nvmf_host_discovery 00:29:05.356 ************************************ 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.356 ************************************ 00:29:05.356 START TEST nvmf_host_multipath_status 00:29:05.356 ************************************ 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:29:05.356 * Looking for test storage... 00:29:05.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.356 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.357 
13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:05.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.357 --rc genhtml_branch_coverage=1 00:29:05.357 --rc genhtml_function_coverage=1 00:29:05.357 --rc genhtml_legend=1 00:29:05.357 --rc geninfo_all_blocks=1 00:29:05.357 --rc geninfo_unexecuted_blocks=1 00:29:05.357 00:29:05.357 ' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:05.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.357 --rc genhtml_branch_coverage=1 00:29:05.357 --rc genhtml_function_coverage=1 00:29:05.357 --rc genhtml_legend=1 00:29:05.357 --rc geninfo_all_blocks=1 00:29:05.357 --rc geninfo_unexecuted_blocks=1 00:29:05.357 00:29:05.357 ' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:05.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.357 --rc genhtml_branch_coverage=1 00:29:05.357 --rc genhtml_function_coverage=1 00:29:05.357 --rc genhtml_legend=1 00:29:05.357 --rc geninfo_all_blocks=1 00:29:05.357 --rc geninfo_unexecuted_blocks=1 00:29:05.357 00:29:05.357 ' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:05.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.357 --rc genhtml_branch_coverage=1 00:29:05.357 --rc genhtml_function_coverage=1 00:29:05.357 --rc genhtml_legend=1 00:29:05.357 --rc geninfo_all_blocks=1 00:29:05.357 --rc geninfo_unexecuted_blocks=1 00:29:05.357 00:29:05.357 ' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:05.357 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.358 13:13:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.358 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.498 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:13.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:13.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:13.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.498 13:13:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.498 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:13.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.499 13:13:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:29:13.499 00:29:13.499 --- 10.0.0.2 ping statistics --- 00:29:13.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.499 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:29:13.499 00:29:13.499 --- 10.0.0.1 ping statistics --- 00:29:13.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.499 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1054706 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1054706 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1054706 ']' 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.499 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:13.499 [2024-11-29 13:13:15.431454] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:29:13.499 [2024-11-29 13:13:15.431524] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.499 [2024-11-29 13:13:15.531463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:13.499 [2024-11-29 13:13:15.584247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.499 [2024-11-29 13:13:15.584300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:13.499 [2024-11-29 13:13:15.584309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.499 [2024-11-29 13:13:15.584316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.499 [2024-11-29 13:13:15.584322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.499 [2024-11-29 13:13:15.585945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.499 [2024-11-29 13:13:15.585947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1054706 00:29:13.760 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:14.022 [2024-11-29 13:13:16.442562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.022 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:29:14.022 Malloc0 00:29:14.295 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:14.295 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.562 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.822 [2024-11-29 13:13:17.248243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:14.822 [2024-11-29 13:13:17.444713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1055135 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1055135 /var/tmp/bdevperf.sock 00:29:14.822 13:13:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1055135 ']' 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.822 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:15.763 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.763 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:29:15.763 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:16.024 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:16.285 Nvme0n1 00:29:16.285 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:16.858 Nvme0n1 00:29:16.858 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:16.858 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:18.773 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:18.773 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:19.033 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:19.294 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:20.255 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:20.255 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:20.255 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:20.255 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:20.577 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:20.577 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:20.577 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:20.577 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:20.577 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:20.577 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:20.577 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:20.577 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:20.837 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.097 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.097 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:21.097 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:21.097 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:21.357 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:21.357 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:21.357 13:13:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:21.618 13:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:21.618 13:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:22.559 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:22.559 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:22.818 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:23.076 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.076 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:23.076 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.076 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.336 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:23.597 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.597 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:23.597 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:23.597 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:23.858 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:23.858 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:23.858 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:23.858 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:24.118 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:29:25.058 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:29:25.058 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:25.058 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.058 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:25.318 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.318 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:25.318 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.318 13:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:25.577 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:25.577 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:25.577 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.577 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:25.578 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.578 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:25.578 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.578 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:25.836 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:25.836 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:25.836 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:25.836 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:26.095 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.095 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:26.095 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:26.095 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:26.355 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:26.355 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:29:26.355 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:26.355 13:13:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:26.618 13:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:29:27.560 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:29:27.560 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:27.560 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.560 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:27.823 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:27.823 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:27.823 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:27.823 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.083 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:28.344 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.344 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:28.344 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:28.344 13:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.604 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:28.604 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:28.604 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:28.604 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:28.865 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:28.865 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:28.865 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:28.865 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:29.126 13:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:30.067 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:30.067 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:30.067 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.067 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:30.328 13:13:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:30.328 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:30.328 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.328 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.590 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:30.851 
13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:30.851 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:30.851 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:30.851 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:31.113 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:31.373 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:31.634 13:13:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:32.586 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:32.586 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:32.586 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.586 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.847 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:33.109 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.109 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:33.109 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.109 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:33.370 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.370 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:33.370 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.370 13:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:33.370 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:33.370 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:33.370 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.370 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:33.631 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.631 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:33.891 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:29:33.891 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:33.891 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:34.152 13:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:35.095 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:35.095 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:35.095 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:29:35.095 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:35.357 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.357 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:35.357 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.357 13:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:35.618 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.618 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:35.618 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:35.618 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:35.879 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:36.140 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:36.140 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:36.140 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.140 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:36.401 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:36.401 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:36.401 13:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:36.401 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:36.662 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:37.605 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:37.605 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:37.605 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:37.605 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.865 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:37.865 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:37.865 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.865 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.126 13:13:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.126 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:38.388 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.388 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:38.388 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.388 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:38.648 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.648 
13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:38.648 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:38.648 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:38.909 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.909 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:38.909 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:38.909 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:39.170 13:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:29:40.111 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:40.111 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:40.111 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.111 13:13:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:40.370 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.370 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:40.370 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:40.370 13:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:40.628 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.628 13:13:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:40.888 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.888 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:40.888 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.888 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:41.148 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.148 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:41.148 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.148 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:41.409 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.409 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:41.409 13:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:41.409 13:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:41.669 13:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:42.615 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:42.615 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:42.615 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.615 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:42.875 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:42.875 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:42.875 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.875 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:43.134 13:13:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.134 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:43.394 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.394 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:43.394 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.394 13:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:43.655 
13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1055135 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1055135 ']' 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1055135 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:43.655 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.656 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1055135 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1055135' 00:29:43.931 killing process with pid 1055135 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1055135 00:29:43.931 
13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1055135 00:29:43.931 { 00:29:43.931 "results": [ 00:29:43.931 { 00:29:43.931 "job": "Nvme0n1", 00:29:43.931 "core_mask": "0x4", 00:29:43.931 "workload": "verify", 00:29:43.931 "status": "terminated", 00:29:43.931 "verify_range": { 00:29:43.931 "start": 0, 00:29:43.931 "length": 16384 00:29:43.931 }, 00:29:43.931 "queue_depth": 128, 00:29:43.931 "io_size": 4096, 00:29:43.931 "runtime": 26.85919, 00:29:43.931 "iops": 11852.740160816466, 00:29:43.931 "mibps": 46.29976625318932, 00:29:43.931 "io_failed": 0, 00:29:43.931 "io_timeout": 0, 00:29:43.931 "avg_latency_us": 10780.936021234156, 00:29:43.931 "min_latency_us": 404.48, 00:29:43.931 "max_latency_us": 3019898.88 00:29:43.931 } 00:29:43.931 ], 00:29:43.931 "core_count": 1 00:29:43.931 } 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1055135 00:29:43.931 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:43.931 [2024-11-29 13:13:17.507592] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:29:43.931 [2024-11-29 13:13:17.507659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055135 ] 00:29:43.931 [2024-11-29 13:13:17.598649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.932 [2024-11-29 13:13:17.649809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.932 Running I/O for 90 seconds... 
00:29:43.932 10892.00 IOPS, 42.55 MiB/s [2024-11-29T12:13:46.612Z] 10991.00 IOPS, 42.93 MiB/s [2024-11-29T12:13:46.612Z] 11002.33 IOPS, 42.98 MiB/s [2024-11-29T12:13:46.612Z] 11358.25 IOPS, 44.37 MiB/s [2024-11-29T12:13:46.612Z] 11667.20 IOPS, 45.58 MiB/s [2024-11-29T12:13:46.612Z] 11848.67 IOPS, 46.28 MiB/s [2024-11-29T12:13:46.612Z] 12020.43 IOPS, 46.95 MiB/s [2024-11-29T12:13:46.612Z] 12110.38 IOPS, 47.31 MiB/s [2024-11-29T12:13:46.612Z] 12173.89 IOPS, 47.55 MiB/s [2024-11-29T12:13:46.612Z] 12239.10 IOPS, 47.81 MiB/s [2024-11-29T12:13:46.612Z] 12293.18 IOPS, 48.02 MiB/s [2024-11-29T12:13:46.612Z] [2024-11-29 13:13:31.442516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.932 [2024-11-29 13:13:31.442550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.442675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.442680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:29:43.932 [2024-11-29 13:13:31.443423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 [2024-11-29 13:13:31.443495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.932 
[2024-11-29 13:13:31.443512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.932 [2024-11-29 13:13:31.443529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.932 [2024-11-29 13:13:31.443819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.932 [2024-11-29 13:13:31.443841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.932 [2024-11-29 13:13:31.443853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.932 [2024-11-29 13:13:31.443859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.933 [2024-11-29 13:13:31.443877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.933 
[2024-11-29 13:13:31.443890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.933 [2024-11-29 13:13:31.443897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.933 [2024-11-29 13:13:31.443915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.933 [2024-11-29 13:13:31.443933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.443951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.443969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.443982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 
13:13:31.443988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 
13:13:31.444093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 
13:13:31.444193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 
13:13:31.444297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 
13:13:31.444392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 
13:13:31.444556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.933 [2024-11-29 13:13:31.444640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.933 [2024-11-29 13:13:31.444646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 
13:13:31.444666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 
13:13:31.444779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.934 [2024-11-29 13:13:31.444784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 
13:13:31.444918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.444979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.444994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 
13:13:31.445036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 
13:13:31.445144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 
13:13:31.445266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:31.445355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:31.445370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 
13:13:31.445375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.934
12277.67 IOPS, 47.96 MiB/s [2024-11-29T12:13:46.614Z]
11333.23 IOPS, 44.27 MiB/s [2024-11-29T12:13:46.614Z]
10523.71 IOPS, 41.11 MiB/s [2024-11-29T12:13:46.614Z]
9860.13 IOPS, 38.52 MiB/s [2024-11-29T12:13:46.614Z]
10035.38 IOPS, 39.20 MiB/s [2024-11-29T12:13:46.614Z]
10189.12 IOPS, 39.80 MiB/s [2024-11-29T12:13:46.614Z]
10538.89 IOPS, 41.17 MiB/s [2024-11-29T12:13:46.614Z]
10865.74 IOPS, 42.44 MiB/s [2024-11-29T12:13:46.614Z]
11073.70 IOPS, 43.26 MiB/s [2024-11-29T12:13:46.614Z]
11150.57 IOPS, 43.56 MiB/s [2024-11-29T12:13:46.614Z]
11223.00 IOPS, 43.84 MiB/s [2024-11-29T12:13:46.614Z]
11429.00 IOPS, 44.64 MiB/s [2024-11-29T12:13:46.614Z]
11645.25 IOPS, 45.49 MiB/s [2024-11-29T12:13:46.614Z]
[2024-11-29 13:13:44.183100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:44.183137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:44.183153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.934 [2024-11-29 13:13:44.183165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.934 [2024-11-29 13:13:44.183175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.935 [2024-11-29 13:13:44.183725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.183846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.183852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.935 [2024-11-29 13:13:44.184901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.935 [2024-11-29 13:13:44.184975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.935 [2024-11-29 13:13:44.184981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.184992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.185988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.185993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.936 [2024-11-29 13:13:44.186248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.936 [2024-11-29 13:13:44.186506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.936 [2024-11-29 13:13:44.186515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.186949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.186964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.186980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.186990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.186995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.187073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.937 [2024-11-29 13:13:44.187204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.937 [2024-11-29 13:13:44.187613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.937 [2024-11-29 13:13:44.187623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.187834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.187938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.187943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.938 [2024-11-29 13:13:44.188764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.938 [2024-11-29 13:13:44.188838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.938 [2024-11-29 13:13:44.188843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.188853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.188858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.188869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.188874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.188887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.188892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.188989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.188996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.189152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.189190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.189200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.189206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.939 [2024-11-29 13:13:44.190811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.190868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.190873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.191581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.191593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.191604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.191610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.939 [2024-11-29 13:13:44.191620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.939 [2024-11-29 13:13:44.191628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.191863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.191889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.191894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.940 [2024-11-29 13:13:44.202499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.940 [2024-11-29 13:13:44.202540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.940 [2024-11-29 13:13:44.202545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.202561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.202608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.202718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.202744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.202750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.203548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.203567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.203584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.203599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.203615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.203631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.203646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.203662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.203672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.203678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.941 [2024-11-29 13:13:44.204327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.941 [2024-11-29 13:13:44.204373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.941 [2024-11-29 13:13:44.204384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.204407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.204422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.204453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.204469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.204546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.204556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.204562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.205489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.205531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.205537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.206191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.206254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.942 [2024-11-29 13:13:44.206269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.942 [2024-11-29 13:13:44.206303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.942 [2024-11-29 13:13:44.206313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.206318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.206349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.206962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.206989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.206995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.207983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.207993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.943 [2024-11-29 13:13:44.207999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.208009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.943 [2024-11-29 13:13:44.208014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.943 [2024-11-29 13:13:44.208024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.208933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.208981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.208988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.209894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.209984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.209995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.210002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.210021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.210042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.210060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.944 [2024-11-29 13:13:44.210077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.210095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.210113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.944 [2024-11-29 13:13:44.210131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.944 [2024-11-29 13:13:44.210143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.210905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.210923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.210992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.210998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.945 [2024-11-29 13:13:44.211888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.945 [2024-11-29 13:13:44.211906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.945 [2024-11-29 13:13:44.211918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.211924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.211936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.211943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.211956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.211964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.212947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.212973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.212980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.212992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.212999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.213484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.213534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.213540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.215250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.215265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.215279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.946 [2024-11-29 13:13:44.215285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.215297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.215303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.215315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.946 [2024-11-29 13:13:44.215322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.946 [2024-11-29 13:13:44.215333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.215902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.215933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.215939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.947 [2024-11-29 13:13:44.217094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.217113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.217132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.217150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.947 [2024-11-29 13:13:44.217185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.947 [2024-11-29 13:13:44.217197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.217975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.217987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.217992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.218005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.218011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.218023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.218029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.219085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.219108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.219126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.948 [2024-11-29 13:13:44.219240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.948 [2024-11-29 13:13:44.219294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.948 [2024-11-29 13:13:44.219306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.219674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.219684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.219690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.220884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.220904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.220920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.220935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.220950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.220965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.220980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.220990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.949 [2024-11-29 13:13:44.220995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.221005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.221011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.221021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.221026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.221036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.949 [2024-11-29 13:13:44.221041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.949 [2024-11-29 13:13:44.221051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.221534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.950 [2024-11-29 13:13:44.222647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.950 [2024-11-29 13:13:44.222724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:43.950 [2024-11-29 13:13:44.222734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.222963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.222974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.222979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.224830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.224872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.224878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.225738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.951 [2024-11-29 13:13:44.225752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.225764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.951 [2024-11-29 13:13:44.225769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:43.951 [2024-11-29 13:13:44.225780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.225833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.225880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.225896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.225912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.225987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.225992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.226007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.226023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.226039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.226054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.226070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.226086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.952 [2024-11-29 13:13:44.226102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.226436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.952 [2024-11-29 13:13:44.226447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.952 [2024-11-29 13:13:44.226456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... ~200 further nvme_qpair.c NOTICE pairs omitted (00:29:43.952–00:29:43.955, 2024-11-29 13:13:44.226–13:13:44.231): each is a READ or WRITE command on sqid:1 (nvme_io_qpair_print_command) followed by its completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) from spdk_nvme_print_completion ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.955 [2024-11-29 13:13:44.231834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.955 [2024-11-29 13:13:44.231866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.955 [2024-11-29 13:13:44.231882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.955 [2024-11-29 13:13:44.231961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.955 [2024-11-29 13:13:44.231977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:43.955 [2024-11-29 13:13:44.231988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.955 [2024-11-29 13:13:44.231993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.232004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.232009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.232019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.232024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.232034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.232055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.232066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.232071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.233933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.233949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.233970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.233976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.233987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.233992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.956 [2024-11-29 13:13:44.234460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:43.956 [2024-11-29 13:13:44.234486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.956 [2024-11-29 13:13:44.234492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.234524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.234541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.234632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.234638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.957 [2024-11-29 13:13:44.235269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.957 [2024-11-29 13:13:44.235332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:43.957 [2024-11-29 13:13:44.235343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.957 [2024-11-29 13:13:44.235348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:43.957 [2024-11-29 13:13:44.235359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.957 [2024-11-29 13:13:44.235364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:43.957 [2024-11-29 13:13:44.235374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.957 [2024-11-29 13:13:44.235380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:43.957 [2024-11-29 13:13:44.235390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.957 [2024-11-29 13:13:44.235396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:43.957 [2024-11-29 13:13:44.235408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.957 [2024-11-29 13:13:44.235414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:43.957 11794.44 IOPS, 46.07 MiB/s [2024-11-29T12:13:46.637Z]
11830.12 IOPS, 46.21 MiB/s [2024-11-29T12:13:46.637Z]
Received shutdown signal, test time was about 26.859801 seconds
00:29:43.957
00:29:43.957 Latency(us)
00:29:43.957
[2024-11-29T12:13:46.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.957 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:43.957 Verification LBA range: start 0x0 length 0x4000 00:29:43.957 Nvme0n1 : 26.86 11852.74 46.30 0.00 0.00 10780.94 404.48 3019898.88 00:29:43.957 [2024-11-29T12:13:46.637Z] =================================================================================================================== 00:29:43.957 [2024-11-29T12:13:46.637Z] Total : 11852.74 46.30 0.00 0.00 10780.94 404.48 3019898.88 00:29:43.957 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.335 rmmod nvme_tcp 00:29:44.335 rmmod nvme_fabrics 00:29:44.335 rmmod nvme_keyring 00:29:44.335 
13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1054706 ']' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1054706 ']' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1054706' 00:29:44.335 killing process with pid 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1054706 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.335 13:13:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.335 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.346 13:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.346 00:29:46.346 real 0m41.397s 00:29:46.346 user 1m47.047s 00:29:46.346 sys 0m11.643s 00:29:46.346 13:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.346 13:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:46.346 ************************************ 00:29:46.346 END TEST nvmf_host_multipath_status 00:29:46.346 ************************************ 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.607 ************************************ 00:29:46.607 START TEST nvmf_discovery_remove_ifc 00:29:46.607 ************************************ 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:46.607 * Looking for test storage... 00:29:46.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.607 13:13:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:46.607 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.868 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.869 --rc genhtml_branch_coverage=1 00:29:46.869 --rc genhtml_function_coverage=1 00:29:46.869 --rc genhtml_legend=1 00:29:46.869 --rc geninfo_all_blocks=1 
00:29:46.869 --rc geninfo_unexecuted_blocks=1 00:29:46.869 00:29:46.869 ' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.869 --rc genhtml_branch_coverage=1 00:29:46.869 --rc genhtml_function_coverage=1 00:29:46.869 --rc genhtml_legend=1 00:29:46.869 --rc geninfo_all_blocks=1 00:29:46.869 --rc geninfo_unexecuted_blocks=1 00:29:46.869 00:29:46.869 ' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.869 --rc genhtml_branch_coverage=1 00:29:46.869 --rc genhtml_function_coverage=1 00:29:46.869 --rc genhtml_legend=1 00:29:46.869 --rc geninfo_all_blocks=1 00:29:46.869 --rc geninfo_unexecuted_blocks=1 00:29:46.869 00:29:46.869 ' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.869 --rc genhtml_branch_coverage=1 00:29:46.869 --rc genhtml_function_coverage=1 00:29:46.869 --rc genhtml_legend=1 00:29:46.869 --rc geninfo_all_blocks=1 00:29:46.869 --rc geninfo_unexecuted_blocks=1 00:29:46.869 00:29:46.869 ' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.869 
13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.869 
13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.869 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.870 13:13:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.870 13:13:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.018 13:13:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.018 13:13:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:55.018 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.018 13:13:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:55.018 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:55.018 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:55.018 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:55.018 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:55.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:29:55.019 00:29:55.019 --- 10.0.0.2 ping statistics --- 00:29:55.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.019 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:55.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:55.019 00:29:55.019 --- 10.0.0.1 ping statistics --- 00:29:55.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.019 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1065272 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1065272 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1065272 ']' 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.019 13:13:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.019 [2024-11-29 13:13:56.855360] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:29:55.019 [2024-11-29 13:13:56.855429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.019 [2024-11-29 13:13:56.954986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.019 [2024-11-29 13:13:57.004868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.019 [2024-11-29 13:13:57.004918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:55.019 [2024-11-29 13:13:57.004927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.019 [2024-11-29 13:13:57.004934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.019 [2024-11-29 13:13:57.004940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.019 [2024-11-29 13:13:57.005693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.019 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.019 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:55.019 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.019 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.019 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.280 [2024-11-29 13:13:57.742222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.280 [2024-11-29 13:13:57.750495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:55.280 null0 00:29:55.280 [2024-11-29 13:13:57.782429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1065382 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1065382 /tmp/host.sock 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1065382 ']' 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:55.280 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.280 13:13:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:55.280 [2024-11-29 13:13:57.860796] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:29:55.280 [2024-11-29 13:13:57.860861] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065382 ] 00:29:55.280 [2024-11-29 13:13:57.954016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.541 [2024-11-29 13:13:58.007401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.113 13:13:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.113 13:13:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.499 [2024-11-29 13:13:59.831108] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:57.499 [2024-11-29 13:13:59.831128] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:57.499 [2024-11-29 13:13:59.831140] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.499 [2024-11-29 13:13:59.959565] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:57.499 [2024-11-29 13:14:00.019314] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:57.499 [2024-11-29 13:14:00.020265] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x59b410:1 started. 
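As an aside on the `ipts` helper that appears earlier in this log (nvmf/common.sh@287, expanding into the tagged `iptables` call at @790): it forwards its arguments to iptables and appends a `SPDK_NVMF:`-prefixed comment recording those arguments, so the rule can be located and removed during cleanup. A minimal sketch of that pattern, with `iptables` stubbed out so it runs unprivileged (the stub is illustrative, not part of the test scripts):

```shell
#!/usr/bin/env bash
# Stub iptables with echo so this sketch runs without root; the real
# helper invokes the actual iptables binary.
iptables() { echo "iptables $*"; }

# Forward all arguments to iptables, then tag the rule with a comment
# that records the original arguments, mirroring the expansion seen in
# the log ('SPDK_NVMF:-I INPUT 1 -i cvl_0_1 ...').
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then match rules by that comment tag rather than re-deriving the original rule arguments.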
00:29:57.499 [2024-11-29 13:14:00.021834] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:57.499 [2024-11-29 13:14:00.021878] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:57.499 [2024-11-29 13:14:00.021900] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:57.499 [2024-11-29 13:14:00.021915] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:57.499 [2024-11-29 13:14:00.021935] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:57.499 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.500 [2024-11-29 13:14:00.029832] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x59b410 was disconnected and freed. delete nvme_qpair. 
00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:57.500 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:57.761 13:14:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:58.703 13:14:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:59.645 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.906 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.906 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.906 13:14:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:00.848 13:14:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:01.788 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:01.788 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:01.789 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.049 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:02.049 13:14:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:02.990 [2024-11-29 13:14:05.462525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:02.990 [2024-11-29 13:14:05.462560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.990 [2024-11-29 13:14:05.462568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.990 [2024-11-29 13:14:05.462576] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.990 [2024-11-29 13:14:05.462581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.990 [2024-11-29 13:14:05.462588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.990 [2024-11-29 13:14:05.462596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.990 [2024-11-29 13:14:05.462603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.990 [2024-11-29 13:14:05.462608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.990 [2024-11-29 13:14:05.462614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:02.990 [2024-11-29 13:14:05.462619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:02.990 [2024-11-29 13:14:05.462624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577c50 is same with the state(6) to be set 00:30:02.990 [2024-11-29 13:14:05.472546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577c50 (9): Bad file descriptor 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:02.990 13:14:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:02.990 [2024-11-29 13:14:05.482580] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:02.990 [2024-11-29 13:14:05.482591] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:02.990 [2024-11-29 13:14:05.482596] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:02.990 [2024-11-29 13:14:05.482600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:02.990 [2024-11-29 13:14:05.482616] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:03.931 [2024-11-29 13:14:06.491238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:03.931 [2024-11-29 13:14:06.491333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577c50 with addr=10.0.0.2, port=4420 00:30:03.931 [2024-11-29 13:14:06.491364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577c50 is same with the state(6) to be set 00:30:03.931 [2024-11-29 13:14:06.491419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577c50 (9): Bad file descriptor 00:30:03.931 [2024-11-29 13:14:06.492543] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:03.931 [2024-11-29 13:14:06.492613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:03.931 [2024-11-29 13:14:06.492636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:03.931 [2024-11-29 13:14:06.492659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:03.931 [2024-11-29 13:14:06.492680] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:03.931 [2024-11-29 13:14:06.492696] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:03.931 [2024-11-29 13:14:06.492709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:03.931 [2024-11-29 13:14:06.492743] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:30:03.931 [2024-11-29 13:14:06.492757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:03.931 13:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.931 13:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:03.931 13:14:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.873 [2024-11-29 13:14:07.495178] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:04.873 [2024-11-29 13:14:07.495192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:04.873 [2024-11-29 13:14:07.495200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:04.873 [2024-11-29 13:14:07.495205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:04.873 [2024-11-29 13:14:07.495210] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:04.873 [2024-11-29 13:14:07.495215] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:04.873 [2024-11-29 13:14:07.495219] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:04.873 [2024-11-29 13:14:07.495222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:04.873 [2024-11-29 13:14:07.495238] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:04.873 [2024-11-29 13:14:07.495255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.873 [2024-11-29 13:14:07.495261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.873 [2024-11-29 13:14:07.495268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.873 [2024-11-29 13:14:07.495273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.873 [2024-11-29 13:14:07.495279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.873 [2024-11-29 13:14:07.495284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.873 [2024-11-29 13:14:07.495290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.873 [2024-11-29 13:14:07.495295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.873 [2024-11-29 13:14:07.495301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.873 [2024-11-29 13:14:07.495306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.873 [2024-11-29 13:14:07.495311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:30:04.873 [2024-11-29 13:14:07.495700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x567350 (9): Bad file descriptor 00:30:04.873 [2024-11-29 13:14:07.496710] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:04.873 [2024-11-29 13:14:07.496718] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:04.873 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.133 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:05.133 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.133 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.133 13:14:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:05.133 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:05.134 13:14:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:06.075 13:14:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:06.075 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.335 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:06.335 13:14:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:06.905 [2024-11-29 13:14:09.557333] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:06.905 [2024-11-29 13:14:09.557346] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:06.905 [2024-11-29 13:14:09.557355] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:07.165 [2024-11-29 13:14:09.645628] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:07.165 [2024-11-29 13:14:09.703275] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:07.165 [2024-11-29 13:14:09.703932] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x551020:1 started. 
00:30:07.165 [2024-11-29 13:14:09.704852] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:07.165 [2024-11-29 13:14:09.704878] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:07.165 [2024-11-29 13:14:09.704893] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:07.165 [2024-11-29 13:14:09.704904] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:07.165 [2024-11-29 13:14:09.704909] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:07.165 [2024-11-29 13:14:09.713856] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x551020 was disconnected and freed. delete nvme_qpair. 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:07.165 13:14:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1065382 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1065382 ']' 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1065382 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.165 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1065382 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1065382' 00:30:07.426 killing process with pid 1065382 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1065382 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1065382 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.426 
13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.426 13:14:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.426 rmmod nvme_tcp 00:30:07.426 rmmod nvme_fabrics 00:30:07.426 rmmod nvme_keyring 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1065272 ']' 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1065272 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1065272 ']' 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1065272 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.426 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1065272 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1065272' 00:30:07.686 
killing process with pid 1065272 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1065272 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1065272 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.686 13:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.228 13:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.228 00:30:10.228 real 0m23.237s 00:30:10.228 user 0m27.245s 00:30:10.228 sys 0m7.028s 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:10.229 ************************************ 00:30:10.229 END TEST nvmf_discovery_remove_ifc 00:30:10.229 ************************************ 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.229 ************************************ 00:30:10.229 START TEST nvmf_identify_kernel_target 00:30:10.229 ************************************ 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:10.229 * Looking for test storage... 
00:30:10.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:10.229 13:14:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.229 13:14:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.229 --rc genhtml_branch_coverage=1 00:30:10.229 --rc genhtml_function_coverage=1 00:30:10.229 --rc genhtml_legend=1 00:30:10.229 --rc geninfo_all_blocks=1 00:30:10.229 --rc geninfo_unexecuted_blocks=1 00:30:10.229 00:30:10.229 ' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.229 --rc genhtml_branch_coverage=1 00:30:10.229 --rc genhtml_function_coverage=1 00:30:10.229 --rc genhtml_legend=1 00:30:10.229 --rc geninfo_all_blocks=1 00:30:10.229 --rc geninfo_unexecuted_blocks=1 00:30:10.229 00:30:10.229 ' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.229 --rc genhtml_branch_coverage=1 00:30:10.229 --rc genhtml_function_coverage=1 00:30:10.229 --rc genhtml_legend=1 00:30:10.229 --rc geninfo_all_blocks=1 00:30:10.229 --rc geninfo_unexecuted_blocks=1 00:30:10.229 00:30:10.229 ' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.229 --rc genhtml_branch_coverage=1 00:30:10.229 --rc genhtml_function_coverage=1 00:30:10.229 --rc genhtml_legend=1 00:30:10.229 --rc geninfo_all_blocks=1 00:30:10.229 --rc geninfo_unexecuted_blocks=1 00:30:10.229 00:30:10.229 ' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.229 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.230 13:14:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.372 13:14:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:18.372 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.372 13:14:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:18.372 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.372 13:14:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:18.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.372 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:18.373 Found net devices under 0000:4b:00.1: cvl_0_1 
00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.373 13:14:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:18.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:30:18.373 00:30:18.373 --- 10.0.0.2 ping statistics --- 00:30:18.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.373 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:18.373 00:30:18.373 --- 10.0.0.1 ping statistics --- 00:30:18.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.373 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:18.373 
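The `nvmf_tcp_init` steps traced above (flush addresses, move one port into a namespace, assign 10.0.0.x/24 to each side, open TCP/4420, then ping both ways) can be sketched as a dry-run script. The `run` wrapper and the `setup_tcp_topology` helper name are illustrative assumptions; the real `nvmf/common.sh` executes these commands directly as root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based TCP test topology set up in the
# trace above. Printing instead of executing avoids needing root here.
set -euo pipefail

run() { echo "+ $*"; }   # assumption: dry-run wrapper for illustration

setup_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=$3
    local target_ip=10.0.0.2 initiator_ip=10.0.0.1

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    # Moving the target-side port into its own namespace lets both ends of
    # the physical link live on one host without the kernel short-circuiting
    # the traffic over loopback.
    run ip link set "$target_if" netns "$ns"
    run ip addr add "$initiator_ip/24" dev "$initiator_if"
    run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic on the discovery/IO port.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before the test proceeds.
    run ping -c 1 "$target_ip"
    run ip netns exec "$ns" ping -c 1 "$initiator_ip"
}

setup_tcp_topology cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```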
13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:18.373 13:14:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:20.920 Waiting for block devices as requested 00:30:21.181 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:21.181 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:21.181 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:21.442 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:21.442 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:21.442 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:21.703 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:21.703 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:21.703 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:21.964 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:21.964 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:22.225 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:22.225 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:22.225 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:22.485 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:30:22.485 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:22.485 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:23.057 No valid GPT data, bailing 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:30:23.057 00:30:23.057 Discovery Log Number of Records 2, Generation counter 2 00:30:23.057 =====Discovery Log Entry 0====== 00:30:23.057 trtype: tcp 00:30:23.057 adrfam: ipv4 00:30:23.057 subtype: current discovery subsystem 
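The `configure_kernel_target` mkdir/echo/ln sequence traced above follows the standard kernel `nvmet` configfs layout. A dry-run sketch of those steps; the `run` wrapper is illustrative, and the attribute file names (`attr_model`, `device_path`, `addr_traddr`, etc.) are filled in from the standard nvmet configfs interface, since the trace shows the `echo` values but not their redirection targets.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the kernel NVMe-oF target configuration traced above.
# The real steps require root and the nvmet / nvme-tcp modules.
set -euo pipefail

run() { echo "+ $*"; }   # assumption: dry-run wrapper for illustration

configure_kernel_target() {
    local nqn=$1 traddr=$2 device=$3
    local nvmet=/sys/kernel/config/nvmet
    local subsys=$nvmet/subsystems/$nqn
    local port=$nvmet/ports/1

    run modprobe nvmet
    run modprobe nvme-tcp
    run mkdir -p "$subsys/namespaces/1" "$port"
    # Model string and open host access for the test subsystem.
    run "echo SPDK-$nqn > $subsys/attr_model"
    run "echo 1 > $subsys/attr_allow_any_host"
    # Back namespace 1 with a local block device and enable it.
    run "echo $device > $subsys/namespaces/1/device_path"
    run "echo 1 > $subsys/namespaces/1/enable"
    # TCP listener on the standard NVMe-oF port.
    run "echo $traddr > $port/addr_traddr"
    run "echo tcp > $port/addr_trtype"
    run "echo 4420 > $port/addr_trsvcid"
    run "echo ipv4 > $port/addr_adrfam"
    # Linking the subsystem under the port is what makes it discoverable.
    run ln -s "$subsys" "$port/subsystems/"
}

configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 /dev/nvme0n1
```

After these steps, `nvme discover -t tcp -a 10.0.0.1 -s 4420` should return two records, the discovery subsystem and the test NQN, matching the log output that follows.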
00:30:23.057 treq: not specified, sq flow control disable supported 00:30:23.057 portid: 1 00:30:23.057 trsvcid: 4420 00:30:23.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:23.057 traddr: 10.0.0.1 00:30:23.057 eflags: none 00:30:23.057 sectype: none 00:30:23.057 =====Discovery Log Entry 1====== 00:30:23.057 trtype: tcp 00:30:23.057 adrfam: ipv4 00:30:23.057 subtype: nvme subsystem 00:30:23.057 treq: not specified, sq flow control disable supported 00:30:23.057 portid: 1 00:30:23.057 trsvcid: 4420 00:30:23.057 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:23.057 traddr: 10.0.0.1 00:30:23.057 eflags: none 00:30:23.057 sectype: none 00:30:23.057 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:23.057 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:23.057 ===================================================== 00:30:23.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:23.057 ===================================================== 00:30:23.057 Controller Capabilities/Features 00:30:23.057 ================================ 00:30:23.057 Vendor ID: 0000 00:30:23.057 Subsystem Vendor ID: 0000 00:30:23.057 Serial Number: edf63173f47e1bf82ec7 00:30:23.057 Model Number: Linux 00:30:23.057 Firmware Version: 6.8.9-20 00:30:23.057 Recommended Arb Burst: 0 00:30:23.057 IEEE OUI Identifier: 00 00 00 00:30:23.057 Multi-path I/O 00:30:23.057 May have multiple subsystem ports: No 00:30:23.057 May have multiple controllers: No 00:30:23.057 Associated with SR-IOV VF: No 00:30:23.057 Max Data Transfer Size: Unlimited 00:30:23.057 Max Number of Namespaces: 0 00:30:23.057 Max Number of I/O Queues: 1024 00:30:23.057 NVMe Specification Version (VS): 1.3 00:30:23.057 NVMe Specification Version (Identify): 1.3 00:30:23.057 Maximum Queue Entries: 1024 
00:30:23.057 Contiguous Queues Required: No 00:30:23.057 Arbitration Mechanisms Supported 00:30:23.057 Weighted Round Robin: Not Supported 00:30:23.057 Vendor Specific: Not Supported 00:30:23.057 Reset Timeout: 7500 ms 00:30:23.057 Doorbell Stride: 4 bytes 00:30:23.057 NVM Subsystem Reset: Not Supported 00:30:23.057 Command Sets Supported 00:30:23.057 NVM Command Set: Supported 00:30:23.058 Boot Partition: Not Supported 00:30:23.058 Memory Page Size Minimum: 4096 bytes 00:30:23.058 Memory Page Size Maximum: 4096 bytes 00:30:23.058 Persistent Memory Region: Not Supported 00:30:23.058 Optional Asynchronous Events Supported 00:30:23.058 Namespace Attribute Notices: Not Supported 00:30:23.058 Firmware Activation Notices: Not Supported 00:30:23.058 ANA Change Notices: Not Supported 00:30:23.058 PLE Aggregate Log Change Notices: Not Supported 00:30:23.058 LBA Status Info Alert Notices: Not Supported 00:30:23.058 EGE Aggregate Log Change Notices: Not Supported 00:30:23.058 Normal NVM Subsystem Shutdown event: Not Supported 00:30:23.058 Zone Descriptor Change Notices: Not Supported 00:30:23.058 Discovery Log Change Notices: Supported 00:30:23.058 Controller Attributes 00:30:23.058 128-bit Host Identifier: Not Supported 00:30:23.058 Non-Operational Permissive Mode: Not Supported 00:30:23.058 NVM Sets: Not Supported 00:30:23.058 Read Recovery Levels: Not Supported 00:30:23.058 Endurance Groups: Not Supported 00:30:23.058 Predictable Latency Mode: Not Supported 00:30:23.058 Traffic Based Keep ALive: Not Supported 00:30:23.058 Namespace Granularity: Not Supported 00:30:23.058 SQ Associations: Not Supported 00:30:23.058 UUID List: Not Supported 00:30:23.058 Multi-Domain Subsystem: Not Supported 00:30:23.058 Fixed Capacity Management: Not Supported 00:30:23.058 Variable Capacity Management: Not Supported 00:30:23.058 Delete Endurance Group: Not Supported 00:30:23.058 Delete NVM Set: Not Supported 00:30:23.058 Extended LBA Formats Supported: Not Supported 00:30:23.058 Flexible 
Data Placement Supported: Not Supported 00:30:23.058 00:30:23.058 Controller Memory Buffer Support 00:30:23.058 ================================ 00:30:23.058 Supported: No 00:30:23.058 00:30:23.058 Persistent Memory Region Support 00:30:23.058 ================================ 00:30:23.058 Supported: No 00:30:23.058 00:30:23.058 Admin Command Set Attributes 00:30:23.058 ============================ 00:30:23.058 Security Send/Receive: Not Supported 00:30:23.058 Format NVM: Not Supported 00:30:23.058 Firmware Activate/Download: Not Supported 00:30:23.058 Namespace Management: Not Supported 00:30:23.058 Device Self-Test: Not Supported 00:30:23.058 Directives: Not Supported 00:30:23.058 NVMe-MI: Not Supported 00:30:23.058 Virtualization Management: Not Supported 00:30:23.058 Doorbell Buffer Config: Not Supported 00:30:23.058 Get LBA Status Capability: Not Supported 00:30:23.058 Command & Feature Lockdown Capability: Not Supported 00:30:23.058 Abort Command Limit: 1 00:30:23.058 Async Event Request Limit: 1 00:30:23.058 Number of Firmware Slots: N/A 00:30:23.058 Firmware Slot 1 Read-Only: N/A 00:30:23.058 Firmware Activation Without Reset: N/A 00:30:23.058 Multiple Update Detection Support: N/A 00:30:23.058 Firmware Update Granularity: No Information Provided 00:30:23.058 Per-Namespace SMART Log: No 00:30:23.058 Asymmetric Namespace Access Log Page: Not Supported 00:30:23.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:23.058 Command Effects Log Page: Not Supported 00:30:23.058 Get Log Page Extended Data: Supported 00:30:23.058 Telemetry Log Pages: Not Supported 00:30:23.058 Persistent Event Log Pages: Not Supported 00:30:23.058 Supported Log Pages Log Page: May Support 00:30:23.058 Commands Supported & Effects Log Page: Not Supported 00:30:23.058 Feature Identifiers & Effects Log Page:May Support 00:30:23.058 NVMe-MI Commands & Effects Log Page: May Support 00:30:23.058 Data Area 4 for Telemetry Log: Not Supported 00:30:23.058 Error Log Page Entries 
Supported: 1 00:30:23.058 Keep Alive: Not Supported 00:30:23.058 00:30:23.058 NVM Command Set Attributes 00:30:23.058 ========================== 00:30:23.058 Submission Queue Entry Size 00:30:23.058 Max: 1 00:30:23.058 Min: 1 00:30:23.058 Completion Queue Entry Size 00:30:23.058 Max: 1 00:30:23.058 Min: 1 00:30:23.058 Number of Namespaces: 0 00:30:23.058 Compare Command: Not Supported 00:30:23.058 Write Uncorrectable Command: Not Supported 00:30:23.058 Dataset Management Command: Not Supported 00:30:23.058 Write Zeroes Command: Not Supported 00:30:23.058 Set Features Save Field: Not Supported 00:30:23.058 Reservations: Not Supported 00:30:23.058 Timestamp: Not Supported 00:30:23.058 Copy: Not Supported 00:30:23.058 Volatile Write Cache: Not Present 00:30:23.058 Atomic Write Unit (Normal): 1 00:30:23.058 Atomic Write Unit (PFail): 1 00:30:23.058 Atomic Compare & Write Unit: 1 00:30:23.058 Fused Compare & Write: Not Supported 00:30:23.058 Scatter-Gather List 00:30:23.058 SGL Command Set: Supported 00:30:23.058 SGL Keyed: Not Supported 00:30:23.058 SGL Bit Bucket Descriptor: Not Supported 00:30:23.058 SGL Metadata Pointer: Not Supported 00:30:23.058 Oversized SGL: Not Supported 00:30:23.058 SGL Metadata Address: Not Supported 00:30:23.058 SGL Offset: Supported 00:30:23.058 Transport SGL Data Block: Not Supported 00:30:23.058 Replay Protected Memory Block: Not Supported 00:30:23.058 00:30:23.058 Firmware Slot Information 00:30:23.058 ========================= 00:30:23.058 Active slot: 0 00:30:23.058 00:30:23.058 00:30:23.058 Error Log 00:30:23.058 ========= 00:30:23.058 00:30:23.058 Active Namespaces 00:30:23.058 ================= 00:30:23.058 Discovery Log Page 00:30:23.058 ================== 00:30:23.058 Generation Counter: 2 00:30:23.058 Number of Records: 2 00:30:23.058 Record Format: 0 00:30:23.058 00:30:23.058 Discovery Log Entry 0 00:30:23.058 ---------------------- 00:30:23.058 Transport Type: 3 (TCP) 00:30:23.058 Address Family: 1 (IPv4) 00:30:23.058 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:30:23.058 Entry Flags: 00:30:23.058 Duplicate Returned Information: 0 00:30:23.058 Explicit Persistent Connection Support for Discovery: 0 00:30:23.058 Transport Requirements: 00:30:23.058 Secure Channel: Not Specified 00:30:23.058 Port ID: 1 (0x0001) 00:30:23.058 Controller ID: 65535 (0xffff) 00:30:23.058 Admin Max SQ Size: 32 00:30:23.058 Transport Service Identifier: 4420 00:30:23.058 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:23.058 Transport Address: 10.0.0.1 00:30:23.058 Discovery Log Entry 1 00:30:23.058 ---------------------- 00:30:23.058 Transport Type: 3 (TCP) 00:30:23.058 Address Family: 1 (IPv4) 00:30:23.058 Subsystem Type: 2 (NVM Subsystem) 00:30:23.058 Entry Flags: 00:30:23.058 Duplicate Returned Information: 0 00:30:23.058 Explicit Persistent Connection Support for Discovery: 0 00:30:23.058 Transport Requirements: 00:30:23.058 Secure Channel: Not Specified 00:30:23.058 Port ID: 1 (0x0001) 00:30:23.058 Controller ID: 65535 (0xffff) 00:30:23.058 Admin Max SQ Size: 32 00:30:23.058 Transport Service Identifier: 4420 00:30:23.058 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:23.058 Transport Address: 10.0.0.1 00:30:23.058 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:23.320 get_feature(0x01) failed 00:30:23.320 get_feature(0x02) failed 00:30:23.320 get_feature(0x04) failed 00:30:23.320 ===================================================== 00:30:23.320 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:23.320 ===================================================== 00:30:23.320 Controller Capabilities/Features 00:30:23.320 ================================ 00:30:23.320 Vendor ID: 0000 00:30:23.320 Subsystem Vendor ID: 
0000 00:30:23.320 Serial Number: abcb248b4c2bb16c6981 00:30:23.320 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:23.320 Firmware Version: 6.8.9-20 00:30:23.320 Recommended Arb Burst: 6 00:30:23.320 IEEE OUI Identifier: 00 00 00 00:30:23.320 Multi-path I/O 00:30:23.320 May have multiple subsystem ports: Yes 00:30:23.320 May have multiple controllers: Yes 00:30:23.320 Associated with SR-IOV VF: No 00:30:23.320 Max Data Transfer Size: Unlimited 00:30:23.320 Max Number of Namespaces: 1024 00:30:23.320 Max Number of I/O Queues: 128 00:30:23.320 NVMe Specification Version (VS): 1.3 00:30:23.320 NVMe Specification Version (Identify): 1.3 00:30:23.320 Maximum Queue Entries: 1024 00:30:23.320 Contiguous Queues Required: No 00:30:23.320 Arbitration Mechanisms Supported 00:30:23.320 Weighted Round Robin: Not Supported 00:30:23.320 Vendor Specific: Not Supported 00:30:23.320 Reset Timeout: 7500 ms 00:30:23.320 Doorbell Stride: 4 bytes 00:30:23.320 NVM Subsystem Reset: Not Supported 00:30:23.320 Command Sets Supported 00:30:23.320 NVM Command Set: Supported 00:30:23.320 Boot Partition: Not Supported 00:30:23.320 Memory Page Size Minimum: 4096 bytes 00:30:23.320 Memory Page Size Maximum: 4096 bytes 00:30:23.320 Persistent Memory Region: Not Supported 00:30:23.320 Optional Asynchronous Events Supported 00:30:23.320 Namespace Attribute Notices: Supported 00:30:23.320 Firmware Activation Notices: Not Supported 00:30:23.320 ANA Change Notices: Supported 00:30:23.320 PLE Aggregate Log Change Notices: Not Supported 00:30:23.320 LBA Status Info Alert Notices: Not Supported 00:30:23.320 EGE Aggregate Log Change Notices: Not Supported 00:30:23.320 Normal NVM Subsystem Shutdown event: Not Supported 00:30:23.320 Zone Descriptor Change Notices: Not Supported 00:30:23.320 Discovery Log Change Notices: Not Supported 00:30:23.320 Controller Attributes 00:30:23.320 128-bit Host Identifier: Supported 00:30:23.320 Non-Operational Permissive Mode: Not Supported 00:30:23.320 NVM Sets: Not 
Supported 00:30:23.320 Read Recovery Levels: Not Supported 00:30:23.320 Endurance Groups: Not Supported 00:30:23.320 Predictable Latency Mode: Not Supported 00:30:23.320 Traffic Based Keep ALive: Supported 00:30:23.320 Namespace Granularity: Not Supported 00:30:23.320 SQ Associations: Not Supported 00:30:23.320 UUID List: Not Supported 00:30:23.320 Multi-Domain Subsystem: Not Supported 00:30:23.320 Fixed Capacity Management: Not Supported 00:30:23.320 Variable Capacity Management: Not Supported 00:30:23.320 Delete Endurance Group: Not Supported 00:30:23.320 Delete NVM Set: Not Supported 00:30:23.320 Extended LBA Formats Supported: Not Supported 00:30:23.320 Flexible Data Placement Supported: Not Supported 00:30:23.320 00:30:23.320 Controller Memory Buffer Support 00:30:23.320 ================================ 00:30:23.320 Supported: No 00:30:23.320 00:30:23.320 Persistent Memory Region Support 00:30:23.320 ================================ 00:30:23.320 Supported: No 00:30:23.320 00:30:23.320 Admin Command Set Attributes 00:30:23.320 ============================ 00:30:23.320 Security Send/Receive: Not Supported 00:30:23.320 Format NVM: Not Supported 00:30:23.320 Firmware Activate/Download: Not Supported 00:30:23.320 Namespace Management: Not Supported 00:30:23.320 Device Self-Test: Not Supported 00:30:23.320 Directives: Not Supported 00:30:23.320 NVMe-MI: Not Supported 00:30:23.320 Virtualization Management: Not Supported 00:30:23.320 Doorbell Buffer Config: Not Supported 00:30:23.320 Get LBA Status Capability: Not Supported 00:30:23.320 Command & Feature Lockdown Capability: Not Supported 00:30:23.320 Abort Command Limit: 4 00:30:23.320 Async Event Request Limit: 4 00:30:23.320 Number of Firmware Slots: N/A 00:30:23.320 Firmware Slot 1 Read-Only: N/A 00:30:23.320 Firmware Activation Without Reset: N/A 00:30:23.320 Multiple Update Detection Support: N/A 00:30:23.320 Firmware Update Granularity: No Information Provided 00:30:23.320 Per-Namespace SMART Log: Yes 
00:30:23.320 Asymmetric Namespace Access Log Page: Supported 00:30:23.320 ANA Transition Time : 10 sec 00:30:23.320 00:30:23.320 Asymmetric Namespace Access Capabilities 00:30:23.320 ANA Optimized State : Supported 00:30:23.320 ANA Non-Optimized State : Supported 00:30:23.321 ANA Inaccessible State : Supported 00:30:23.321 ANA Persistent Loss State : Supported 00:30:23.321 ANA Change State : Supported 00:30:23.321 ANAGRPID is not changed : No 00:30:23.321 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:23.321 00:30:23.321 ANA Group Identifier Maximum : 128 00:30:23.321 Number of ANA Group Identifiers : 128 00:30:23.321 Max Number of Allowed Namespaces : 1024 00:30:23.321 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:23.321 Command Effects Log Page: Supported 00:30:23.321 Get Log Page Extended Data: Supported 00:30:23.321 Telemetry Log Pages: Not Supported 00:30:23.321 Persistent Event Log Pages: Not Supported 00:30:23.321 Supported Log Pages Log Page: May Support 00:30:23.321 Commands Supported & Effects Log Page: Not Supported 00:30:23.321 Feature Identifiers & Effects Log Page:May Support 00:30:23.321 NVMe-MI Commands & Effects Log Page: May Support 00:30:23.321 Data Area 4 for Telemetry Log: Not Supported 00:30:23.321 Error Log Page Entries Supported: 128 00:30:23.321 Keep Alive: Supported 00:30:23.321 Keep Alive Granularity: 1000 ms 00:30:23.321 00:30:23.321 NVM Command Set Attributes 00:30:23.321 ========================== 00:30:23.321 Submission Queue Entry Size 00:30:23.321 Max: 64 00:30:23.321 Min: 64 00:30:23.321 Completion Queue Entry Size 00:30:23.321 Max: 16 00:30:23.321 Min: 16 00:30:23.321 Number of Namespaces: 1024 00:30:23.321 Compare Command: Not Supported 00:30:23.321 Write Uncorrectable Command: Not Supported 00:30:23.321 Dataset Management Command: Supported 00:30:23.321 Write Zeroes Command: Supported 00:30:23.321 Set Features Save Field: Not Supported 00:30:23.321 Reservations: Not Supported 00:30:23.321 Timestamp: Not Supported 
00:30:23.321 Copy: Not Supported 00:30:23.321 Volatile Write Cache: Present 00:30:23.321 Atomic Write Unit (Normal): 1 00:30:23.321 Atomic Write Unit (PFail): 1 00:30:23.321 Atomic Compare & Write Unit: 1 00:30:23.321 Fused Compare & Write: Not Supported 00:30:23.321 Scatter-Gather List 00:30:23.321 SGL Command Set: Supported 00:30:23.321 SGL Keyed: Not Supported 00:30:23.321 SGL Bit Bucket Descriptor: Not Supported 00:30:23.321 SGL Metadata Pointer: Not Supported 00:30:23.321 Oversized SGL: Not Supported 00:30:23.321 SGL Metadata Address: Not Supported 00:30:23.321 SGL Offset: Supported 00:30:23.321 Transport SGL Data Block: Not Supported 00:30:23.321 Replay Protected Memory Block: Not Supported 00:30:23.321 00:30:23.321 Firmware Slot Information 00:30:23.321 ========================= 00:30:23.321 Active slot: 0 00:30:23.321 00:30:23.321 Asymmetric Namespace Access 00:30:23.321 =========================== 00:30:23.321 Change Count : 0 00:30:23.321 Number of ANA Group Descriptors : 1 00:30:23.321 ANA Group Descriptor : 0 00:30:23.321 ANA Group ID : 1 00:30:23.321 Number of NSID Values : 1 00:30:23.321 Change Count : 0 00:30:23.321 ANA State : 1 00:30:23.321 Namespace Identifier : 1 00:30:23.321 00:30:23.321 Commands Supported and Effects 00:30:23.321 ============================== 00:30:23.321 Admin Commands 00:30:23.321 -------------- 00:30:23.321 Get Log Page (02h): Supported 00:30:23.321 Identify (06h): Supported 00:30:23.321 Abort (08h): Supported 00:30:23.321 Set Features (09h): Supported 00:30:23.321 Get Features (0Ah): Supported 00:30:23.321 Asynchronous Event Request (0Ch): Supported 00:30:23.321 Keep Alive (18h): Supported 00:30:23.321 I/O Commands 00:30:23.321 ------------ 00:30:23.321 Flush (00h): Supported 00:30:23.321 Write (01h): Supported LBA-Change 00:30:23.321 Read (02h): Supported 00:30:23.321 Write Zeroes (08h): Supported LBA-Change 00:30:23.321 Dataset Management (09h): Supported 00:30:23.321 00:30:23.321 Error Log 00:30:23.321 ========= 
00:30:23.321 Entry: 0 00:30:23.321 Error Count: 0x3 00:30:23.321 Submission Queue Id: 0x0 00:30:23.321 Command Id: 0x5 00:30:23.321 Phase Bit: 0 00:30:23.321 Status Code: 0x2 00:30:23.321 Status Code Type: 0x0 00:30:23.321 Do Not Retry: 1 00:30:23.321 Error Location: 0x28 00:30:23.321 LBA: 0x0 00:30:23.321 Namespace: 0x0 00:30:23.321 Vendor Log Page: 0x0 00:30:23.321 ----------- 00:30:23.321 Entry: 1 00:30:23.321 Error Count: 0x2 00:30:23.321 Submission Queue Id: 0x0 00:30:23.321 Command Id: 0x5 00:30:23.321 Phase Bit: 0 00:30:23.321 Status Code: 0x2 00:30:23.321 Status Code Type: 0x0 00:30:23.321 Do Not Retry: 1 00:30:23.321 Error Location: 0x28 00:30:23.321 LBA: 0x0 00:30:23.321 Namespace: 0x0 00:30:23.321 Vendor Log Page: 0x0 00:30:23.321 ----------- 00:30:23.321 Entry: 2 00:30:23.321 Error Count: 0x1 00:30:23.321 Submission Queue Id: 0x0 00:30:23.321 Command Id: 0x4 00:30:23.321 Phase Bit: 0 00:30:23.321 Status Code: 0x2 00:30:23.321 Status Code Type: 0x0 00:30:23.321 Do Not Retry: 1 00:30:23.321 Error Location: 0x28 00:30:23.321 LBA: 0x0 00:30:23.321 Namespace: 0x0 00:30:23.321 Vendor Log Page: 0x0 00:30:23.321 00:30:23.321 Number of Queues 00:30:23.321 ================ 00:30:23.321 Number of I/O Submission Queues: 128 00:30:23.321 Number of I/O Completion Queues: 128 00:30:23.321 00:30:23.321 ZNS Specific Controller Data 00:30:23.321 ============================ 00:30:23.321 Zone Append Size Limit: 0 00:30:23.321 00:30:23.321 00:30:23.321 Active Namespaces 00:30:23.321 ================= 00:30:23.321 get_feature(0x05) failed 00:30:23.321 Namespace ID:1 00:30:23.321 Command Set Identifier: NVM (00h) 00:30:23.321 Deallocate: Supported 00:30:23.321 Deallocated/Unwritten Error: Not Supported 00:30:23.321 Deallocated Read Value: Unknown 00:30:23.321 Deallocate in Write Zeroes: Not Supported 00:30:23.321 Deallocated Guard Field: 0xFFFF 00:30:23.321 Flush: Supported 00:30:23.321 Reservation: Not Supported 00:30:23.321 Namespace Sharing Capabilities: Multiple 
Controllers 00:30:23.321 Size (in LBAs): 3750748848 (1788GiB) 00:30:23.321 Capacity (in LBAs): 3750748848 (1788GiB) 00:30:23.321 Utilization (in LBAs): 3750748848 (1788GiB) 00:30:23.321 UUID: 613aaa30-dbb5-4c03-b8e1-c926ec5b0c01 00:30:23.321 Thin Provisioning: Not Supported 00:30:23.321 Per-NS Atomic Units: Yes 00:30:23.321 Atomic Write Unit (Normal): 8 00:30:23.321 Atomic Write Unit (PFail): 8 00:30:23.321 Preferred Write Granularity: 8 00:30:23.321 Atomic Compare & Write Unit: 8 00:30:23.321 Atomic Boundary Size (Normal): 0 00:30:23.321 Atomic Boundary Size (PFail): 0 00:30:23.321 Atomic Boundary Offset: 0 00:30:23.321 NGUID/EUI64 Never Reused: No 00:30:23.321 ANA group ID: 1 00:30:23.321 Namespace Write Protected: No 00:30:23.321 Number of LBA Formats: 1 00:30:23.321 Current LBA Format: LBA Format #00 00:30:23.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:23.321 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.321 rmmod nvme_tcp 00:30:23.321 rmmod nvme_fabrics 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:23.321 13:14:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.321 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.322 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.322 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.322 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.322 13:14:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:25.867 13:14:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.868 13:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:25.868 13:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:25.868 13:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:29.171 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:30:29.171 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:29.171 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:29.743 00:30:29.743 real 0m19.718s 00:30:29.743 user 0m5.404s 00:30:29.743 sys 0m11.345s 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.743 ************************************ 00:30:29.743 END TEST nvmf_identify_kernel_target 00:30:29.743 ************************************ 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.743 ************************************ 00:30:29.743 START TEST nvmf_auth_host 00:30:29.743 ************************************ 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:29.743 * Looking for test storage... 
00:30:29.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:29.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.743 --rc genhtml_branch_coverage=1 00:30:29.743 --rc genhtml_function_coverage=1 00:30:29.743 --rc genhtml_legend=1 00:30:29.743 --rc geninfo_all_blocks=1 00:30:29.743 --rc geninfo_unexecuted_blocks=1 00:30:29.743 00:30:29.743 ' 00:30:29.743 13:14:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:29.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.743 --rc genhtml_branch_coverage=1 00:30:29.743 --rc genhtml_function_coverage=1 00:30:29.743 --rc genhtml_legend=1 00:30:29.743 --rc geninfo_all_blocks=1 00:30:29.743 --rc geninfo_unexecuted_blocks=1 00:30:29.743 00:30:29.743 ' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:29.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.743 --rc genhtml_branch_coverage=1 00:30:29.743 --rc genhtml_function_coverage=1 00:30:29.743 --rc genhtml_legend=1 00:30:29.743 --rc geninfo_all_blocks=1 00:30:29.743 --rc geninfo_unexecuted_blocks=1 00:30:29.743 00:30:29.743 ' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:29.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.743 --rc genhtml_branch_coverage=1 00:30:29.743 --rc genhtml_function_coverage=1 00:30:29.743 --rc genhtml_legend=1 00:30:29.743 --rc geninfo_all_blocks=1 00:30:29.743 --rc geninfo_unexecuted_blocks=1 00:30:29.743 00:30:29.743 ' 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.743 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.006 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.007 13:14:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.007 13:14:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.007 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:38.157 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:38.157 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:38.157 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:38.157 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:38.157 13:14:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.157 13:14:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.157 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:30:38.158 00:30:38.158 --- 10.0.0.2 ping statistics --- 00:30:38.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.158 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:38.158 00:30:38.158 --- 10.0.0.1 ping statistics --- 00:30:38.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.158 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1079809 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1079809 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1079809 ']' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.158 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.158 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.158 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:30:38.158 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.158 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.158 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.419 13:14:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=810931d4ce1afab3f95dd881491f5b18 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vTa 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 810931d4ce1afab3f95dd881491f5b18 0 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 810931d4ce1afab3f95dd881491f5b18 0 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=810931d4ce1afab3f95dd881491f5b18 00:30:38.419 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vTa 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vTa 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vTa 
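The gen_dhchap_key trace above (draw hex bytes from /dev/urandom with xxd, format them via an inline `python -` step, then `chmod 0600` the temp file) condenses to the standalone sketch below. The DHHC-1 envelope layout — base64 of the key bytes plus a little-endian CRC32 trailer, with the digest id as two hex digits — is my assumption about what the inline python step computes; it is not shown in the log itself.

```shell
# Hedged sketch of the gen_dhchap_key / format_dhchap_key flow traced above.
# ASSUMPTION: the inline 'python -' step emits the DHHC-1 envelope as
# base64(key || crc32(key) little-endian), with the digest id as 2 hex digits.
len=32                                  # requested key length in hex digits
digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes as hex
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"                      # keep the secret private, as in the log
```

The same shape repeats for every keys[i]/ckeys[i] pair in the trace; only `len` and `digest` vary.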
00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f09e4ba7e36c87cc8d84e273484c4cd6487c152a08baa0beab1cbe9f2f985806 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lJw 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f09e4ba7e36c87cc8d84e273484c4cd6487c152a08baa0beab1cbe9f2f985806 3 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f09e4ba7e36c87cc8d84e273484c4cd6487c152a08baa0beab1cbe9f2f985806 3 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f09e4ba7e36c87cc8d84e273484c4cd6487c152a08baa0beab1cbe9f2f985806 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:38.420 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lJw 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lJw 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lJw 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2adcf593b6ee9a848d078a012fef44aaab3b155eee10b4bb 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0kR 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2adcf593b6ee9a848d078a012fef44aaab3b155eee10b4bb 0 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2adcf593b6ee9a848d078a012fef44aaab3b155eee10b4bb 0 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2adcf593b6ee9a848d078a012fef44aaab3b155eee10b4bb 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0kR 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0kR 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.0kR 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:38.420 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=337d2c7a1fe9263164f39cbb62007751454ed40a143659f3 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vT9 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 337d2c7a1fe9263164f39cbb62007751454ed40a143659f3 2 00:30:38.680 13:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 337d2c7a1fe9263164f39cbb62007751454ed40a143659f3 2 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=337d2c7a1fe9263164f39cbb62007751454ed40a143659f3 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vT9 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vT9 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.vT9 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ae244d546f558de05f1ae2d960fd531 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YwN 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ae244d546f558de05f1ae2d960fd531 1 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ae244d546f558de05f1ae2d960fd531 1 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.680 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ae244d546f558de05f1ae2d960fd531 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YwN 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YwN 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YwN 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06109215bd8b31182060e9af8948670b 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UW7 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06109215bd8b31182060e9af8948670b 1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06109215bd8b31182060e9af8948670b 1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06109215bd8b31182060e9af8948670b 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UW7 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UW7 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UW7 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.681 13:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=12f6c3edc9f51232477d3c2a8fd7c3879704189c606a753a 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Egn 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 12f6c3edc9f51232477d3c2a8fd7c3879704189c606a753a 2 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 12f6c3edc9f51232477d3c2a8fd7c3879704189c606a753a 2 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=12f6c3edc9f51232477d3c2a8fd7c3879704189c606a753a 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:38.681 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Egn 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Egn 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Egn 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=10c3032fe06cdf5ef4c60c7c6a5527db 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DWf 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 10c3032fe06cdf5ef4c60c7c6a5527db 0 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 10c3032fe06cdf5ef4c60c7c6a5527db 0 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=10c3032fe06cdf5ef4c60c7c6a5527db 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DWf 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DWf 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.DWf 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c722ea564c45f12b0e344f15d9cca5c6793450e68427899db116950a85d26815 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2G2 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c722ea564c45f12b0e344f15d9cca5c6793450e68427899db116950a85d26815 3 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c722ea564c45f12b0e344f15d9cca5c6793450e68427899db116950a85d26815 3 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c722ea564c45f12b0e344f15d9cca5c6793450e68427899db116950a85d26815 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:38.942 13:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2G2 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2G2 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2G2 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1079809 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1079809 ']' 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
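For reference, the nvmf_tcp_init sequence traced near the top of this section (create a network namespace for the target NIC, assign the 10.0.0.x/24 addresses, open TCP port 4420, then ping in both directions) condenses to the dry-run sketch below. Interface, namespace, and address names are taken from the log; the `run` wrapper only prints each command, since actually applying them requires root.

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced earlier in this log.
# Names (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk, 10.0.0.x/24) come from the
# trace; 'run' just echoes -- swap it for 'sudo "$@"' to apply for real.
run() { echo "+ $*"; }
NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target NIC into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
```

With this layout in place, the target app is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the trace shows), so target and initiator talk over a real TCP path on one host.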
00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.942 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vTa 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lJw ]] 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lJw 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.0kR 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.vT9 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vT9
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YwN
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UW7 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UW7
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Egn
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DWf ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DWf
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2G2
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:39.205 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:30:39.206 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:30:39.206 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:30:39.206 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:30:39.206 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:30:39.206 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:30:42.509 Waiting for block devices as requested
00:30:42.769 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:30:42.770 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:30:42.770 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:30:43.030 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:30:43.030 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:30:43.030 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:30:43.291 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:30:43.291 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:30:43.291 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:30:43.552 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:30:43.552 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:30:43.552 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:30:43.813 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:30:43.813 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:30:43.813 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:30:43.813 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:30:44.073 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:30:45.015 No valid GPT data, bailing
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:30:45.015
00:30:45.015 Discovery Log Number of Records 2, Generation counter 2
00:30:45.015 =====Discovery Log Entry 0======
00:30:45.015 trtype: tcp
00:30:45.015 adrfam: ipv4
00:30:45.015 subtype: current discovery subsystem
00:30:45.015 treq: not specified, sq flow control disable supported
00:30:45.015 portid: 1
00:30:45.015 trsvcid: 4420
00:30:45.015 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:45.015 traddr: 10.0.0.1
00:30:45.015 eflags: none
00:30:45.015 sectype: none
00:30:45.015 =====Discovery Log Entry 1======
00:30:45.015 trtype: tcp
00:30:45.015 adrfam: ipv4
00:30:45.015 subtype: nvme subsystem
00:30:45.015 treq: not specified, sq flow control disable supported
00:30:45.015 portid: 1
00:30:45.015 trsvcid: 4420
00:30:45.015 subnqn: nqn.2024-02.io.spdk:cnode0
00:30:45.015 traddr: 10.0.0.1
00:30:45.015 eflags: none
00:30:45.015 sectype: none
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:45.015 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:45.016 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:45.016 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.016 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.276 nvme0n1
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj:
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=:
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj:
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=:
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.276 13:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.539 nvme0n1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.539 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.801 nvme0n1
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.801 nvme0n1
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.801 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==:
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+:
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==:
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+:
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.063 nvme0n1
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.063 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=:
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:46.325 13:14:48
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.325 nvme0n1 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.325 
13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:46.325 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:46.326 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:46.326 
13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:46.326 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.326 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.587 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.587 nvme0n1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.587 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.587 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.587 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.850 nvme0n1 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:46.850 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.850 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.113 nvme0n1 00:30:47.113 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:47.113 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.113 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.375 nvme0n1 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.375 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.375 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.375 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.375 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.375 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.375 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.636 13:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.636 nvme0n1 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.636 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:47.896 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.897 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.157 nvme0n1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:48.158 
13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.158 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.420 nvme0n1 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.420 13:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:48.420 13:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.420 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.681 nvme0n1 00:30:48.681 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.681 13:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.681 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.681 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.681 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:48.682 
13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:48.682 13:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:48.682 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.943 nvme0n1 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.943 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.203 13:14:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.203 
13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.203 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.464 nvme0n1 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.464 13:14:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:49.464 13:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.464 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 nvme0n1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:50.036 13:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.036 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.296 nvme0n1 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.296 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.557 13:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:50.557 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:50.558 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.558 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.818 nvme0n1 00:30:50.818 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.818 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:50.818 13:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:50.818 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.818 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.818 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.080 13:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.080 13:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.080 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.341 nvme0n1 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.341 13:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.341 13:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.341 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.341 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.341 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:51.602 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.603 13:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.603 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.865 nvme0n1 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.865 13:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.865 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.126 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.127 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.127 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:52.127 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.127 13:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.698 nvme0n1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.698 13:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.698 13:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:52.698 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.698 13:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.270 nvme0n1 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.270 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.530 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.531 13:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.531 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.103 nvme0n1 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.103 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.104 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.104 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:54.104 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.104 13:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 nvme0n1 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.048 
13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.048 13:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.622 nvme0n1 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.622 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.623 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.884 nvme0n1 00:30:55.884 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.884 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:55.885 
13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.885 nvme0n1 
00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.885 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:56.148 13:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.148 
13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.148 nvme0n1 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.148 13:14:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.148 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.412 nvme0n1 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.412 13:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.412 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.674 nvme0n1 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.674 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.675 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.937 nvme0n1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:56.937 
13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.937 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.198 nvme0n1 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.198 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 
00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.199 13:14:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.199 13:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.461 nvme0n1 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.461 13:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.461 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.723 nvme0n1 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:57.723 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.724 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.985 nvme0n1 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.985 13:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:57.985 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.986 13:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.986 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:58.247 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.247 13:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.247 nvme0n1 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.509 13:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.509 
13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.509 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.769 nvme0n1 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.769 13:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:58.769 13:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:58.769 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.770 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 nvme0n1 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.031 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.032 13:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.032 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.293 nvme0n1 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.293 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.554 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.554 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.554 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.554 13:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.554 13:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:59.554 13:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.554 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.555 
13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.555 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.815 nvme0n1 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:59.815 13:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:30:59.815 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.816 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.387 nvme0n1 
00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:00.387 13:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.387 
13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.387 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.388 13:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.648 nvme0n1 00:31:00.648 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.648 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.648 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.648 13:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.648 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.648 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:00.909 13:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.909 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.170 nvme0n1 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.170 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:01.431 13:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.431 13:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.431 13:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.694 nvme0n1 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.694 13:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.694 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:01.955 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.216 nvme0n1 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:02.216 13:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:02.216 13:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.157 nvme0n1
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]]
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:03.157 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.158 13:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.729 nvme0n1
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]]
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:03.729 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.730 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:04.363 nvme0n1
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:04.363 13:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:04.686 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==:
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+:
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==:
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+:
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:04.687 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.343 nvme0n1
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=:
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=:
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.343 13:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.917 nvme0n1
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj:
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=:
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj:
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=:
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:05.917 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.179 nvme0n1
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==:
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==:
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:06.179 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.180 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.440 nvme0n1
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:06.440 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur:
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]]
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6:
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.441 13:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.441 nvme0n1
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd
bdev_nvme_detach_controller nvme0 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.702 nvme0n1 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.702 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:06.963 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:06.964 nvme0n1 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.964 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.225 13:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.225 13:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.225 nvme0n1 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.225 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:07.486 13:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.486 13:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.486 nvme0n1 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.486 
13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.486 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.747 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.747 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.748 13:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.748 nvme0n1 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.748 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.009 13:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.009 13:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.009 nvme0n1 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.009 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:08.270 13:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 nvme0n1 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.270 
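The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion traced at `host/auth.sh@58` is what makes the attach call above carry `--dhchap-ctrlr-key` for keyids 0-3 but not for keyid 4 (whose ckey is empty). A minimal standalone sketch of that idiom, with hypothetical stand-in key values:

```shell
#!/usr/bin/env bash
# Sketch of the optional-flag expansion from host/auth.sh@58.
# ckeys[] mirrors the trace: keyids 0-3 carry a controller key, keyid 4 does not.
# The values here are hypothetical stand-ins, not the real DHHC-1 secrets.
ckeys=([0]="ck0" [1]="ck1" [2]="ck2" [3]="ck3" [4]="")

build_attach_args() {
    local keyid=$1
    # ${var:+word} expands to "word" only when var is set and non-empty, so the
    # array gains either the two-element flag pair or nothing at all.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "--dhchap-key key${keyid}" "${ckey[@]}"
}

build_attach_args 3   # controller key present: both flag pairs are emitted
build_attach_args 4   # ckey empty: only --dhchap-key is emitted
```

Because the expansion produces whole words rather than an empty string, `rpc_cmd bdev_nvme_attach_controller` never sees a dangling `--dhchap-ctrlr-key` with no argument.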
13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.270 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.532 
13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.532 13:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.793 nvme0n1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.793 13:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:08.793 
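The repeated `get_main_ns_ip` trace above (`nvmf/common.sh@769-783`) maps the transport to the *name* of the variable holding the address, then indirect-expands it; that is why the xtrace shows the literal `NVMF_INITIATOR_IP` at `@776` but `10.0.0.1` at `@783`. A self-contained sketch under assumed addresses (the `10.0.0.2` rdma value is hypothetical; `10.0.0.1` matches the trace):

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip from nvmf/common.sh@769-783; the real function
# reads TEST_TRANSPORT and the NVMF_* variables set up by the test harness.
NVMF_FIRST_TARGET_IP=10.0.0.2   # hypothetical rdma-side address
NVMF_INITIATOR_IP=10.0.0.1      # the address echoed in the trace above

get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates
    # Each transport maps to the *name* of the variable holding its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}
    # Indirect expansion (${!ip}) turns the variable name into its value.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}

get_main_ns_ip tcp   # -> 10.0.0.1
```

The indirection keeps the function transport-agnostic: adding a transport only requires one more `ip_candidates` entry, not another branch.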
13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.793 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.055 nvme0n1 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 
00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.055 13:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.055 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.315 nvme0n1 00:31:09.315 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.315 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.315 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.315 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.315 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.316 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.316 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.316 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.316 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.316 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.576 13:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.576 13:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.576 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.836 nvme0n1 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:09.836 13:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.836 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.097 nvme0n1 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.097 
13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.097 13:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.097 13:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 nvme0n1 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.667 13:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.667 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:10.668 13:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.668 13:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.668 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.238 nvme0n1 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.238 13:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:11.238 13:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.238 13:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.498 nvme0n1 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.498 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:11.759 13:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.759 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.021 nvme0n1 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.021 
13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.021 13:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.592 nvme0n1 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:12.592 13:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEwOTMxZDRjZTFhZmFiM2Y5NWRkODgxNDkxZjViMThsc/bj: 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5ZTRiYTdlMzZjODdjYzhkODRlMjczNDg0YzRjZDY0ODdjMTUyYTA4YmFhMGJlYWIxY2JlOWYyZjk4NTgwNkW/yek=: 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.592 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.163 nvme0n1 00:31:13.163 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.163 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.163 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.163 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.163 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.424 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:13.425 13:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.425 13:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.995 nvme0n1 00:31:13.995 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.995 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.995 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.995 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.996 
13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.996 13:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.936 nvme0n1 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.936 13:15:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTJmNmMzZWRjOWY1MTIzMjQ3N2QzYzJhOGZkN2MzODc5NzA0MTg5YzYwNmE3NTNhx92E3Q==: 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTBjMzAzMmZlMDZjZGY1ZWY0YzYwYzdjNmE1NTI3ZGK5upt+: 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.936 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:15.507 nvme0n1 00:31:15.507 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.507 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.507 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.507 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.507 13:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzcyMmVhNTY0YzQ1ZjEyYjBlMzQ0ZjE1ZDljY2E1YzY3OTM0NTBlNjg0Mjc4OTlkYjExNjk1MGE4NWQyNjgxNVg05WQ=: 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:15.507 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:15.508 
13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.508 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.078 nvme0n1 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.078 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:16.339 
13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.339 request: 00:31:16.339 { 00:31:16.339 "name": "nvme0", 00:31:16.339 "trtype": "tcp", 00:31:16.339 "traddr": "10.0.0.1", 00:31:16.339 "adrfam": "ipv4", 00:31:16.339 "trsvcid": "4420", 00:31:16.339 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:16.339 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:16.339 "prchk_reftag": false, 00:31:16.339 "prchk_guard": false, 00:31:16.339 "hdgst": false, 00:31:16.339 "ddgst": false, 00:31:16.339 "allow_unrecognized_csi": false, 00:31:16.339 "method": "bdev_nvme_attach_controller", 00:31:16.339 "req_id": 1 00:31:16.339 } 00:31:16.339 Got JSON-RPC error response 00:31:16.339 response: 00:31:16.339 { 00:31:16.339 "code": -5, 00:31:16.339 "message": "Input/output 
error" 00:31:16.339 } 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:16.339 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.340 request: 00:31:16.340 { 00:31:16.340 "name": "nvme0", 00:31:16.340 "trtype": "tcp", 00:31:16.340 "traddr": "10.0.0.1", 
00:31:16.340 "adrfam": "ipv4", 00:31:16.340 "trsvcid": "4420", 00:31:16.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:16.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:16.340 "prchk_reftag": false, 00:31:16.340 "prchk_guard": false, 00:31:16.340 "hdgst": false, 00:31:16.340 "ddgst": false, 00:31:16.340 "dhchap_key": "key2", 00:31:16.340 "allow_unrecognized_csi": false, 00:31:16.340 "method": "bdev_nvme_attach_controller", 00:31:16.340 "req_id": 1 00:31:16.340 } 00:31:16.340 Got JSON-RPC error response 00:31:16.340 response: 00:31:16.340 { 00:31:16.340 "code": -5, 00:31:16.340 "message": "Input/output error" 00:31:16.340 } 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.340 13:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.340 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:16.601 13:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:16.601 13:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.601 request: 00:31:16.601 { 00:31:16.601 "name": "nvme0", 00:31:16.601 "trtype": "tcp", 00:31:16.601 "traddr": "10.0.0.1", 00:31:16.601 "adrfam": "ipv4", 00:31:16.601 "trsvcid": "4420", 00:31:16.601 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:16.601 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:16.601 "prchk_reftag": false, 00:31:16.601 "prchk_guard": false, 00:31:16.601 "hdgst": false, 00:31:16.601 "ddgst": false, 00:31:16.601 "dhchap_key": "key1", 00:31:16.601 "dhchap_ctrlr_key": "ckey2", 00:31:16.601 "allow_unrecognized_csi": false, 00:31:16.601 "method": "bdev_nvme_attach_controller", 00:31:16.601 "req_id": 1 00:31:16.601 } 00:31:16.601 Got JSON-RPC error response 00:31:16.601 response: 00:31:16.601 { 00:31:16.601 "code": -5, 00:31:16.601 "message": "Input/output error" 00:31:16.601 } 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.601 nvme0n1 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.601 13:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:16.601 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.602 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.862 13:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.862 request: 00:31:16.862 { 00:31:16.862 "name": "nvme0", 00:31:16.862 "dhchap_key": "key1", 00:31:16.862 "dhchap_ctrlr_key": "ckey2", 00:31:16.862 "method": "bdev_nvme_set_keys", 00:31:16.862 "req_id": 1 00:31:16.862 } 00:31:16.862 Got JSON-RPC error response 00:31:16.862 response: 00:31:16.862 { 00:31:16.862 "code": -13, 00:31:16.862 "message": "Permission denied" 00:31:16.862 } 00:31:16.862 
13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:16.862 13:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:18.243 13:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFkY2Y1OTNiNmVlOWE4NDhkMDc4YTAxMmZlZjQ0YWFhYjNiMTU1ZWVlMTBiNGJiQOLSYA==: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: ]] 00:31:19.183 13:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzM3ZDJjN2ExZmU5MjYzMTY0ZjM5Y2JiNjIwMDc3NTE0NTRlZDQwYTE0MzY1OWYz4FeXEg==: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.183 nvme0n1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.183 13:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWFlMjQ0ZDU0NmY1NThkZTA1ZjFhZTJkOTYwZmQ1MzGIOwur: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: ]] 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYxMDkyMTViZDhiMzExODIwNjBlOWFmODk0ODY3MGI44bx6: 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:19.183 
13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.183 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.183 request: 00:31:19.183 { 00:31:19.183 "name": "nvme0", 00:31:19.183 "dhchap_key": "key2", 00:31:19.183 "dhchap_ctrlr_key": "ckey1", 00:31:19.183 "method": "bdev_nvme_set_keys", 00:31:19.183 "req_id": 1 00:31:19.183 } 00:31:19.183 Got JSON-RPC error response 00:31:19.183 response: 00:31:19.183 { 00:31:19.183 "code": -13, 00:31:19.183 "message": "Permission denied" 00:31:19.183 } 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.184 13:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.184 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.444 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:19.444 13:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.384 rmmod nvme_tcp 00:31:20.384 rmmod nvme_fabrics 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1079809 ']' 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1079809 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1079809 ']' 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1079809 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:20.384 13:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1079809 00:31:20.384 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:20.384 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:20.384 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1079809' 00:31:20.384 killing process with pid 1079809 00:31:20.384 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1079809 00:31:20.384 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1079809 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.644 13:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.554 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.554 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:22.554 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:22.554 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:22.555 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:22.555 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:22.814 13:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:26.110 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:26.370 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:26.939 13:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vTa /tmp/spdk.key-null.0kR /tmp/spdk.key-sha256.YwN /tmp/spdk.key-sha384.Egn /tmp/spdk.key-sha512.2G2 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:26.939 13:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:30.238 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:30.238 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:30.238 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:30.809 00:31:30.809 real 1m1.033s 00:31:30.809 user 0m54.704s 00:31:30.809 sys 0m16.225s 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.809 ************************************ 00:31:30.809 END TEST nvmf_auth_host 00:31:30.809 ************************************ 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:31:30.809 13:15:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.809 ************************************ 00:31:30.809 START TEST nvmf_digest 00:31:30.809 ************************************ 00:31:30.809 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:30.809 * Looking for test storage... 00:31:30.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.810 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.810 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.810 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.071 --rc genhtml_branch_coverage=1 00:31:31.071 --rc genhtml_function_coverage=1 00:31:31.071 --rc genhtml_legend=1 00:31:31.071 --rc geninfo_all_blocks=1 00:31:31.071 --rc geninfo_unexecuted_blocks=1 00:31:31.071 00:31:31.071 ' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.071 --rc genhtml_branch_coverage=1 00:31:31.071 --rc genhtml_function_coverage=1 00:31:31.071 --rc genhtml_legend=1 00:31:31.071 --rc geninfo_all_blocks=1 00:31:31.071 --rc geninfo_unexecuted_blocks=1 00:31:31.071 00:31:31.071 ' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.071 --rc genhtml_branch_coverage=1 00:31:31.071 --rc genhtml_function_coverage=1 00:31:31.071 --rc genhtml_legend=1 00:31:31.071 --rc geninfo_all_blocks=1 00:31:31.071 --rc geninfo_unexecuted_blocks=1 00:31:31.071 00:31:31.071 ' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.071 --rc genhtml_branch_coverage=1 00:31:31.071 --rc genhtml_function_coverage=1 00:31:31.071 --rc genhtml_legend=1 00:31:31.071 --rc geninfo_all_blocks=1 00:31:31.071 --rc geninfo_unexecuted_blocks=1 00:31:31.071 00:31:31.071 ' 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.071 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:31.072 13:15:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.072 13:15:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.211 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.211 13:15:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:39.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:39.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:39.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:39.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.212 13:15:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:31:39.212 00:31:39.212 --- 10.0.0.2 ping statistics --- 00:31:39.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.212 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:39.212 00:31:39.212 --- 10.0.0.1 ping statistics --- 00:31:39.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.212 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:39.212 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 ************************************ 00:31:39.213 START TEST nvmf_digest_clean 00:31:39.213 ************************************ 00:31:39.213 
13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1097367 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1097367 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1097367 ']' 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.213 13:15:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.213 13:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 [2024-11-29 13:15:41.212905] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:39.213 [2024-11-29 13:15:41.212967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.213 [2024-11-29 13:15:41.314501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.213 [2024-11-29 13:15:41.365496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.213 [2024-11-29 13:15:41.365549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.213 [2024-11-29 13:15:41.365558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.213 [2024-11-29 13:15:41.365565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.213 [2024-11-29 13:15:41.365572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:39.213 [2024-11-29 13:15:41.366355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.474 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.734 null0 00:31:39.734 [2024-11-29 13:15:42.186441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.734 [2024-11-29 13:15:42.210788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.734 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.734 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1097615 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1097615 /var/tmp/bperf.sock 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1097615 ']' 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:39.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.735 13:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:39.735 [2024-11-29 13:15:42.271014] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:39.735 [2024-11-29 13:15:42.271077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097615 ] 00:31:39.735 [2024-11-29 13:15:42.361683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.995 [2024-11-29 13:15:42.415681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.566 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.566 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:40.566 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:40.566 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:40.566 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:40.827 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:40.827 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.088 nvme0n1 00:31:41.088 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:41.088 13:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.088 Running I/O for 2 seconds... 00:31:43.420 18125.00 IOPS, 70.80 MiB/s [2024-11-29T12:15:46.100Z] 18990.00 IOPS, 74.18 MiB/s 00:31:43.420 Latency(us) 00:31:43.420 [2024-11-29T12:15:46.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.420 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:43.420 nvme0n1 : 2.04 18641.40 72.82 0.00 0.00 6728.13 3126.61 46093.65 00:31:43.420 [2024-11-29T12:15:46.100Z] =================================================================================================================== 00:31:43.420 [2024-11-29T12:15:46.100Z] Total : 18641.40 72.82 0.00 0.00 6728.13 3126.61 46093.65 00:31:43.420 { 00:31:43.420 "results": [ 00:31:43.420 { 00:31:43.420 "job": "nvme0n1", 00:31:43.420 "core_mask": "0x2", 00:31:43.420 "workload": "randread", 00:31:43.420 "status": "finished", 00:31:43.420 "queue_depth": 128, 00:31:43.420 "io_size": 4096, 00:31:43.420 "runtime": 2.044267, 00:31:43.420 "iops": 18641.40056068997, 00:31:43.420 "mibps": 72.8179709401952, 00:31:43.420 "io_failed": 0, 00:31:43.420 "io_timeout": 0, 00:31:43.420 "avg_latency_us": 6728.132149329975, 00:31:43.420 "min_latency_us": 3126.6133333333332, 00:31:43.420 "max_latency_us": 46093.653333333335 00:31:43.420 } 00:31:43.420 ], 00:31:43.420 "core_count": 1 00:31:43.420 } 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:43.420 | select(.opcode=="crc32c") 00:31:43.420 | "\(.module_name) \(.executed)"' 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1097615 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1097615 ']' 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1097615 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.420 13:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097615 00:31:43.420 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:43.420 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:43.420 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097615' 00:31:43.420 killing process with pid 1097615 00:31:43.420 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1097615 00:31:43.420 Received shutdown signal, test time was about 2.000000 seconds 00:31:43.420 00:31:43.420 Latency(us) 00:31:43.420 [2024-11-29T12:15:46.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.420 [2024-11-29T12:15:46.100Z] =================================================================================================================== 00:31:43.420 [2024-11-29T12:15:46.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.420 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1097615 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1098399 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1098399 /var/tmp/bperf.sock 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1098399 ']' 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:43.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.682 13:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:43.682 [2024-11-29 13:15:46.208602] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:43.682 [2024-11-29 13:15:46.208660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098399 ] 00:31:43.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:43.682 Zero copy mechanism will not be used. 
00:31:43.682 [2024-11-29 13:15:46.296387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.682 [2024-11-29 13:15:46.332020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.384 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.384 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:44.384 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:44.384 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:44.384 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:44.654 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.654 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.915 nvme0n1 00:31:44.915 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:44.915 13:15:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:44.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:44.915 Zero copy mechanism will not be used. 00:31:44.915 Running I/O for 2 seconds... 
00:31:47.238 3902.00 IOPS, 487.75 MiB/s [2024-11-29T12:15:49.918Z] 3484.00 IOPS, 435.50 MiB/s 00:31:47.238 Latency(us) 00:31:47.238 [2024-11-29T12:15:49.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.238 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:47.238 nvme0n1 : 2.00 3487.90 435.99 0.00 0.00 4583.70 604.16 7591.25 00:31:47.238 [2024-11-29T12:15:49.918Z] =================================================================================================================== 00:31:47.238 [2024-11-29T12:15:49.918Z] Total : 3487.90 435.99 0.00 0.00 4583.70 604.16 7591.25 00:31:47.238 { 00:31:47.238 "results": [ 00:31:47.238 { 00:31:47.238 "job": "nvme0n1", 00:31:47.238 "core_mask": "0x2", 00:31:47.238 "workload": "randread", 00:31:47.238 "status": "finished", 00:31:47.238 "queue_depth": 16, 00:31:47.238 "io_size": 131072, 00:31:47.238 "runtime": 2.002349, 00:31:47.238 "iops": 3487.9034573892964, 00:31:47.238 "mibps": 435.98793217366205, 00:31:47.238 "io_failed": 0, 00:31:47.238 "io_timeout": 0, 00:31:47.238 "avg_latency_us": 4583.704681176021, 00:31:47.238 "min_latency_us": 604.16, 00:31:47.238 "max_latency_us": 7591.253333333333 00:31:47.238 } 00:31:47.238 ], 00:31:47.238 "core_count": 1 00:31:47.238 } 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:47.238 | select(.opcode=="crc32c") 00:31:47.238 | "\(.module_name) \(.executed)"' 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1098399 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1098399 ']' 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1098399 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1098399 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1098399' 00:31:47.238 killing process with pid 1098399 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1098399 00:31:47.238 Received shutdown signal, test time was about 2.000000 seconds 
00:31:47.238 00:31:47.238 Latency(us) 00:31:47.238 [2024-11-29T12:15:49.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.238 [2024-11-29T12:15:49.918Z] =================================================================================================================== 00:31:47.238 [2024-11-29T12:15:49.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:47.238 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1098399 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1099090 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1099090 /var/tmp/bperf.sock 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1099090 ']' 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:47.499 13:15:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:47.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.499 13:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:47.499 [2024-11-29 13:15:49.985055] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:47.499 [2024-11-29 13:15:49.985115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099090 ] 00:31:47.499 [2024-11-29 13:15:50.068405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.499 [2024-11-29 13:15:50.097861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.439 13:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.439 13:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:48.439 13:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:48.439 13:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:48.439 13:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:48.439 13:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:48.439 13:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:48.698 nvme0n1 00:31:48.698 13:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:48.698 13:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:48.958 Running I/O for 2 seconds... 
00:31:50.837 30015.00 IOPS, 117.25 MiB/s [2024-11-29T12:15:53.517Z] 30217.00 IOPS, 118.04 MiB/s 00:31:50.837 Latency(us) 00:31:50.837 [2024-11-29T12:15:53.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.837 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.837 nvme0n1 : 2.01 30219.59 118.05 0.00 0.00 4230.07 2157.23 15073.28 00:31:50.837 [2024-11-29T12:15:53.517Z] =================================================================================================================== 00:31:50.837 [2024-11-29T12:15:53.517Z] Total : 30219.59 118.05 0.00 0.00 4230.07 2157.23 15073.28 00:31:50.837 { 00:31:50.837 "results": [ 00:31:50.837 { 00:31:50.837 "job": "nvme0n1", 00:31:50.837 "core_mask": "0x2", 00:31:50.837 "workload": "randwrite", 00:31:50.837 "status": "finished", 00:31:50.837 "queue_depth": 128, 00:31:50.837 "io_size": 4096, 00:31:50.837 "runtime": 2.006149, 00:31:50.837 "iops": 30219.58987094179, 00:31:50.837 "mibps": 118.04527293336636, 00:31:50.837 "io_failed": 0, 00:31:50.837 "io_timeout": 0, 00:31:50.837 "avg_latency_us": 4230.071679560137, 00:31:50.837 "min_latency_us": 2157.2266666666665, 00:31:50.837 "max_latency_us": 15073.28 00:31:50.837 } 00:31:50.837 ], 00:31:50.837 "core_count": 1 00:31:50.837 } 00:31:50.837 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:50.837 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:50.837 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:50.837 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:50.837 | select(.opcode=="crc32c") 00:31:50.837 | "\(.module_name) \(.executed)"' 00:31:50.837 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1099090 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1099090 ']' 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1099090 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1099090 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1099090' 00:31:51.096 killing process with pid 1099090 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1099090 00:31:51.096 Received shutdown signal, test time was about 2.000000 seconds 
00:31:51.096 00:31:51.096 Latency(us) 00:31:51.096 [2024-11-29T12:15:53.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.096 [2024-11-29T12:15:53.776Z] =================================================================================================================== 00:31:51.096 [2024-11-29T12:15:53.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:51.096 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1099090 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1099772 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1099772 /var/tmp/bperf.sock 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1099772 ']' 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:51.356 13:15:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:51.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.356 13:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:51.356 [2024-11-29 13:15:53.886220] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:51.356 [2024-11-29 13:15:53.886278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099772 ] 00:31:51.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:51.356 Zero copy mechanism will not be used. 
00:31:51.356 [2024-11-29 13:15:53.967838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.356 [2024-11-29 13:15:53.997310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:52.297 13:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:52.557 nvme0n1 00:31:52.557 13:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:52.557 13:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:52.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:52.557 Zero copy mechanism will not be used. 00:31:52.557 Running I/O for 2 seconds... 
00:31:54.882 5615.00 IOPS, 701.88 MiB/s [2024-11-29T12:15:57.562Z] 5371.50 IOPS, 671.44 MiB/s 00:31:54.882 Latency(us) 00:31:54.882 [2024-11-29T12:15:57.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.882 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:54.882 nvme0n1 : 2.01 5367.35 670.92 0.00 0.00 2975.99 1242.45 10594.99 00:31:54.882 [2024-11-29T12:15:57.562Z] =================================================================================================================== 00:31:54.882 [2024-11-29T12:15:57.562Z] Total : 5367.35 670.92 0.00 0.00 2975.99 1242.45 10594.99 00:31:54.882 { 00:31:54.882 "results": [ 00:31:54.882 { 00:31:54.882 "job": "nvme0n1", 00:31:54.882 "core_mask": "0x2", 00:31:54.882 "workload": "randwrite", 00:31:54.882 "status": "finished", 00:31:54.882 "queue_depth": 16, 00:31:54.882 "io_size": 131072, 00:31:54.882 "runtime": 2.005272, 00:31:54.882 "iops": 5367.351661021547, 00:31:54.882 "mibps": 670.9189576276934, 00:31:54.882 "io_failed": 0, 00:31:54.882 "io_timeout": 0, 00:31:54.882 "avg_latency_us": 2975.9852135402148, 00:31:54.882 "min_latency_us": 1242.4533333333334, 00:31:54.882 "max_latency_us": 10594.986666666666 00:31:54.882 } 00:31:54.882 ], 00:31:54.882 "core_count": 1 00:31:54.882 } 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:31:54.882 | select(.opcode=="crc32c") 00:31:54.882 | "\(.module_name) \(.executed)"' 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1099772 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1099772 ']' 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1099772 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1099772 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1099772' 00:31:54.882 killing process with pid 1099772 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1099772 00:31:54.882 Received shutdown signal, test time was about 2.000000 seconds 00:31:54.882 
00:31:54.882 Latency(us) 00:31:54.882 [2024-11-29T12:15:57.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.882 [2024-11-29T12:15:57.562Z] =================================================================================================================== 00:31:54.882 [2024-11-29T12:15:57.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.882 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1099772 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1097367 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1097367 ']' 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1097367 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097367 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097367' 00:31:55.143 killing process with pid 1097367 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1097367 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1097367 00:31:55.143 00:31:55.143 real 
0m16.639s 00:31:55.143 user 0m32.893s 00:31:55.143 sys 0m3.662s 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.143 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:55.143 ************************************ 00:31:55.143 END TEST nvmf_digest_clean 00:31:55.143 ************************************ 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.404 ************************************ 00:31:55.404 START TEST nvmf_digest_error 00:31:55.404 ************************************ 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1100607 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1100607 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1100607 ']' 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.404 13:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:55.404 [2024-11-29 13:15:57.929191] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:55.404 [2024-11-29 13:15:57.929248] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.404 [2024-11-29 13:15:58.021879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.404 [2024-11-29 13:15:58.056826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.404 [2024-11-29 13:15:58.056856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:55.404 [2024-11-29 13:15:58.056862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.404 [2024-11-29 13:15:58.056867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.404 [2024-11-29 13:15:58.056871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.404 [2024-11-29 13:15:58.057357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.345 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:56.345 [2024-11-29 13:15:58.759293] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.346 13:15:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:56.346 null0 00:31:56.346 [2024-11-29 13:15:58.838003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.346 [2024-11-29 13:15:58.862222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1100831 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1100831 /var/tmp/bperf.sock 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1100831 ']' 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:56.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.346 13:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:56.346 [2024-11-29 13:15:58.917072] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:31:56.346 [2024-11-29 13:15:58.917125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100831 ] 00:31:56.346 [2024-11-29 13:15:59.001223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.606 [2024-11-29 13:15:59.030879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.177 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.177 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:57.177 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:57.177 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:57.438 13:15:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:57.699 nvme0n1 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:57.699 13:16:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:57.699 Running I/O for 2 seconds... 00:31:57.699 [2024-11-29 13:16:00.348274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.699 [2024-11-29 13:16:00.348304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.699 [2024-11-29 13:16:00.348313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.699 [2024-11-29 13:16:00.359859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.699 [2024-11-29 13:16:00.359880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.699 [2024-11-29 13:16:00.359887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.699 [2024-11-29 13:16:00.368371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.699 [2024-11-29 13:16:00.368389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.699 [2024-11-29 13:16:00.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.379306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.379325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6064 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.379331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.390223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.390241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.390248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.402479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.402497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.402503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.411018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.411036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.420903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.420921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.420927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.430062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.430080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.430087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.438827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.438844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.960 [2024-11-29 13:16:00.438851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.960 [2024-11-29 13:16:00.448662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.960 [2024-11-29 13:16:00.448679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.448685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.457957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.457980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.457986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.467407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.467424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.467430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.476398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.476415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.476422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.486027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.486044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.486050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.494359] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.494376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.494382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.503802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.503819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.503825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.514105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.514122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.514129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.525631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.525648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.525655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.533605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.533621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.533627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.544836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.544853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.544859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.556810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.556827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.564941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.564958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.564964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.576095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.576112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.576118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.585828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.585845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.585851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.594921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.594938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.594945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.605268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.605285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 
13:16:00.605291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.614870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.614887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.614893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.622282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.622299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.622308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.961 [2024-11-29 13:16:00.633799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:57.961 [2024-11-29 13:16:00.633816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.961 [2024-11-29 13:16:00.633822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.223 [2024-11-29 13:16:00.643166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:58.223 [2024-11-29 13:16:00.643183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17824 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.643190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.653162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.653178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.664770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.664787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.664793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.675541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.675557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.675564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.684981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.684997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.685003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.693921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.693938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.693944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.705869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.705885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.705891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.716026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.716042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.223 [2024-11-29 13:16:00.716048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.223 [2024-11-29 13:16:00.724510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.223 [2024-11-29 13:16:00.724526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.724532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.734800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.734817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.734823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.744817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.744834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.744840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.754303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.754320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.763231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.763247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.763253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.772773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.772790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.772796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.784803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.784820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.794424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.794439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.794449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.802115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.802130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.802137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.812900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.812917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.812923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.821561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.821579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.821585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.831177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.831193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.831199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.840001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.840017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.848246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.848263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.848269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.857165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.857181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.857188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.867551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.867568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.867575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.875414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.875434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.875440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.886708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.886726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.886732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.224 [2024-11-29 13:16:00.895969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.224 [2024-11-29 13:16:00.895985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.224 [2024-11-29 13:16:00.895991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.904979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.904997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.905003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.914542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.914559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.914566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.922477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.922493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.922500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.931344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.931360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.931367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.941415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.941432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.941438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.950881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.950897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.950903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.958022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.958039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.958045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.969309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.969325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.969332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.979083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.979099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.979106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.987803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.485 [2024-11-29 13:16:00.987819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.485 [2024-11-29 13:16:00.987825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.485 [2024-11-29 13:16:00.997592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:00.997608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:00.997615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.007542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.007558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.007565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.016355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.016371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.016377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.024803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.024819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.034707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.034724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.034733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.043377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.043393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.043399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.052667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.052684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.052690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.062797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.062814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.062820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.070842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.070859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.070865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.081574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.081591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.081597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.092243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.092261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.092267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.101469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.101485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.101492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.109916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.109933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.109940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.121396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.121416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.121423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.132717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.132734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.132741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.141099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.141116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.141122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.151547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.151564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.151570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.486 [2024-11-29 13:16:01.160548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.486 [2024-11-29 13:16:01.160566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.486 [2024-11-29 13:16:01.160572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.170547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.170565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.170571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.181266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.181283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.181290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.189893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.189910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.189917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.198447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.198465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.198471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.207276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.207294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.207300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.217372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.217389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.217396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.225750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.225767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.225774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.237951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.747 [2024-11-29 13:16:01.237968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.747 [2024-11-29 13:16:01.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.747 [2024-11-29 13:16:01.249915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.249931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.249938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.257679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.257697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.257703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.269286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.269304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.269310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.280641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.280659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.280666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.289423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.289443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.298741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.298758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.298764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.307694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.307711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.307717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.316885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.316902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.316908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.324484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.324501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.324507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 26121.00 IOPS, 102.04 MiB/s [2024-11-29T12:16:01.428Z] [2024-11-29 13:16:01.332960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.332978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.332984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.342259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.342276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.342283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.351587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.351604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.351610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.361506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.361523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.361529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.370983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.371000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.748 [2024-11-29 13:16:01.371006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.748 [2024-11-29 13:16:01.378742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190)
00:31:58.748 [2024-11-29 13:16:01.378760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:31:58.748 [2024-11-29 13:16:01.378766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.748 [2024-11-29 13:16:01.387784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:58.748 [2024-11-29 13:16:01.387802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.748 [2024-11-29 13:16:01.387808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.748 [2024-11-29 13:16:01.398148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:58.748 [2024-11-29 13:16:01.398169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.748 [2024-11-29 13:16:01.398175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.748 [2024-11-29 13:16:01.411660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:58.748 [2024-11-29 13:16:01.411677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.748 [2024-11-29 13:16:01.411684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.748 [2024-11-29 13:16:01.419588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:58.748 [2024-11-29 13:16:01.419604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.748 [2024-11-29 13:16:01.419611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.431190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.431207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.431214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.440326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.440343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.449695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.449712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.449722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.458561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.458578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.458584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.466575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.466592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.466598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.475969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.475986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.475992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.488236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.488254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.488261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.499778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.499796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.499802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.508757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.508774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.508780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.517868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.517884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.517891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.009 [2024-11-29 13:16:01.527157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.009 [2024-11-29 13:16:01.527177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.009 [2024-11-29 13:16:01.527183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.535787] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.535806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.535813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.543972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.543990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.543996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.553832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.553850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.553856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.565514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.565531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.565537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.576037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.576054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.576060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.583958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.583975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.583981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.594049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.594066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.594072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.605829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.605846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.616994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.617011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.617021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.625905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.625922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.625929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.633606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.633624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.633630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.642604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.642621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 
13:16:01.642627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.654187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.654204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.654210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.664562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.664578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.664584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.672941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.672957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.672963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.010 [2024-11-29 13:16:01.681204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.010 [2024-11-29 13:16:01.681221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1958 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.010 [2024-11-29 13:16:01.681227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.690430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.690447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.690453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.700082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.700106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.700112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.711327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.711344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.711350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.720073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.720090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.728619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.728636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.728642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.737013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.737030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.737036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.746575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.746592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.746598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.757606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.757624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.757630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.768030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.768047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-11-29 13:16:01.768052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-11-29 13:16:01.776296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.271 [2024-11-29 13:16:01.776312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.776318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.785195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.785211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.785217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.793987] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.794005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.794012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.803235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.803252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.803258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.811847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.811864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.811870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.821121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.821137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.821144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.829343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.829360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.829366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.840479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.840496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.840502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.851250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.851267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.851273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.863245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.863261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.863271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.874520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.874537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.874543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.885888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.885904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.885910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.894094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.894110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.894116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.905897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.905914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 
13:16:01.905920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.915352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.915368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.915374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.923385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.923402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.923408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.933181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.933197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.933204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.272 [2024-11-29 13:16:01.941120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.272 [2024-11-29 13:16:01.941137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10855 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-11-29 13:16:01.941143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.533 [2024-11-29 13:16:01.950869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.533 [2024-11-29 13:16:01.950889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.533 [2024-11-29 13:16:01.950896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.533 [2024-11-29 13:16:01.961149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.533 [2024-11-29 13:16:01.961168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.533 [2024-11-29 13:16:01.961175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:01.969611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:01.969628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:01.969634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:01.981715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:01.981731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:01.981737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:01.992567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:01.992584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:01.992590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.003849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.003866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.003872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.012852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.012868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.012874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.022098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.022115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.022121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.030701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.030717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.030723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.039812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.039827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.039833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.048627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.048643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.048649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.057117] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.057133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.057139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.066703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.066720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.066726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.075862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.075879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.075885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.083744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.083761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.083767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.093394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.093410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.093417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.103858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.103875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.103881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.114380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.114400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.114406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.122066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.122083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.122089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.132567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.132590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.144074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.144091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.144097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.152754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.152771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.152778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.162412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.162428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 
13:16:02.162434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.171547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.171564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.171570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.181046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.181062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.181069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.189634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.189651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.189657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.198647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.198664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13645 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.198670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.534 [2024-11-29 13:16:02.207266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.534 [2024-11-29 13:16:02.207282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.534 [2024-11-29 13:16:02.207289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.215666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.215683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.225822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.225839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.225845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.237567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.237584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.237590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.247143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.247164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.247171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.256285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.256303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.265852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.265869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.265875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.274798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.274815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.274825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.283017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.283033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.283040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.292432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.292449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.292455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.301755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.301772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.301778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.311921] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.311937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.311943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 [2024-11-29 13:16:02.320249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.320265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.320272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 26370.00 IOPS, 103.01 MiB/s [2024-11-29T12:16:02.476Z] [2024-11-29 13:16:02.333480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a03190) 00:31:59.796 [2024-11-29 13:16:02.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.796 [2024-11-29 13:16:02.333501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.796 00:31:59.796 Latency(us) 00:31:59.796 [2024-11-29T12:16:02.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:59.796 nvme0n1 : 2.01 26376.66 103.03 0.00 0.00 4846.36 2266.45 17585.49 00:31:59.796 [2024-11-29T12:16:02.476Z] 
=================================================================================================================== 00:31:59.796 [2024-11-29T12:16:02.476Z] Total : 26376.66 103.03 0.00 0.00 4846.36 2266.45 17585.49 00:31:59.796 { 00:31:59.796 "results": [ 00:31:59.796 { 00:31:59.796 "job": "nvme0n1", 00:31:59.796 "core_mask": "0x2", 00:31:59.796 "workload": "randread", 00:31:59.796 "status": "finished", 00:31:59.796 "queue_depth": 128, 00:31:59.796 "io_size": 4096, 00:31:59.796 "runtime": 2.005713, 00:31:59.796 "iops": 26376.65508475041, 00:31:59.796 "mibps": 103.0338089248063, 00:31:59.796 "io_failed": 0, 00:31:59.796 "io_timeout": 0, 00:31:59.796 "avg_latency_us": 4846.362852966379, 00:31:59.796 "min_latency_us": 2266.4533333333334, 00:31:59.796 "max_latency_us": 17585.493333333332 00:31:59.796 } 00:31:59.796 ], 00:31:59.796 "core_count": 1 00:31:59.796 } 00:31:59.796 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:59.796 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:59.796 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:59.796 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:59.796 | .driver_specific 00:31:59.796 | .nvme_error 00:31:59.796 | .status_code 00:31:59.796 | .command_transient_transport_error' 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 )) 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1100831 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1100831 ']' 00:32:00.057 13:16:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1100831 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100831 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100831' 00:32:00.057 killing process with pid 1100831 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1100831 00:32:00.057 Received shutdown signal, test time was about 2.000000 seconds 00:32:00.057 00:32:00.057 Latency(us) 00:32:00.057 [2024-11-29T12:16:02.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.057 [2024-11-29T12:16:02.737Z] =================================================================================================================== 00:32:00.057 [2024-11-29T12:16:02.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1100831 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1101519 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1101519 /var/tmp/bperf.sock 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1101519 ']' 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:00.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.057 13:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:00.317 [2024-11-29 13:16:02.766232] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:32:00.317 [2024-11-29 13:16:02.766292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101519 ] 00:32:00.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:00.317 Zero copy mechanism will not be used. 00:32:00.317 [2024-11-29 13:16:02.847598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.317 [2024-11-29 13:16:02.876800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.887 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.887 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:00.887 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:00.887 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.148 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.408 nvme0n1 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:01.408 13:16:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:01.408 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:01.408 Zero copy mechanism will not be used. 00:32:01.408 Running I/O for 2 seconds... 
00:32:01.408 [2024-11-29 13:16:04.033471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.408 [2024-11-29 13:16:04.033504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.408 [2024-11-29 13:16:04.033513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.408 [2024-11-29 13:16:04.043084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.408 [2024-11-29 13:16:04.043106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.408 [2024-11-29 13:16:04.043113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.408 [2024-11-29 13:16:04.054962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.408 [2024-11-29 13:16:04.054982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.408 [2024-11-29 13:16:04.054989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.408 [2024-11-29 13:16:04.065888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.408 [2024-11-29 13:16:04.065906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.408 [2024-11-29 13:16:04.065913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.409 [2024-11-29 13:16:04.075368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.409 [2024-11-29 13:16:04.075386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.409 [2024-11-29 13:16:04.075392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.409 [2024-11-29 13:16:04.078286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.409 [2024-11-29 13:16:04.078304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.409 [2024-11-29 13:16:04.078311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.409 [2024-11-29 13:16:04.083448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.409 [2024-11-29 13:16:04.083465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.409 [2024-11-29 13:16:04.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.094438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.094456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.094463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.104653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.104672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.104679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.110362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.110380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.110391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.115500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.115518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.115525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.123966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.123984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.123990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.128397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.128415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.128421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.137553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.137571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.137577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.144453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.144472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.144478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.153210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.153228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.153234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.160766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.160784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.160790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.167109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.167128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.167134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.171474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.171492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.171502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.179617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.179635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.179641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.190589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.190607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.190614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.195201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.195219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.195225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.202821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.202840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.202847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.212078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.212096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.212102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.224249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.224267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.224273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.236430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.236448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.236455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.249345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.249364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.249370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.261945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.261963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.261970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.670 [2024-11-29 13:16:04.274289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.670 [2024-11-29 13:16:04.274307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.670 [2024-11-29 13:16:04.274313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.286298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.286316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.286323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.298897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.298915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.298922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.310536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.310555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.310561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.322783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.322801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.322808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.333765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.333783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.333789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.671 [2024-11-29 13:16:04.345815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.671 [2024-11-29 13:16:04.345833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.671 [2024-11-29 13:16:04.345840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.356052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.356070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.356083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.367069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.367087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.367094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.378232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.378250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.378256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.389419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.389438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.389444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.400359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.400377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.400383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.412884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.412902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.412909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.932 [2024-11-29 13:16:04.423651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.932 [2024-11-29 13:16:04.423669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.932 [2024-11-29 13:16:04.423675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.435318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.435336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.445851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.445869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.445875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.455524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.455552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.466581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.466599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.466605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.478011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.478029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.478036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.488113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.488131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.488137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.498974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.498992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.498999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.510999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.511017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.511024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.521331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.521349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.521356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.532494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.532511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.532518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.542381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.542399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.542405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.554008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.554027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.554033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.564675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.564693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.564699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.571452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.571471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.571477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.581527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.581545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.581551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.593167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.593185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.593192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:01.933 [2024-11-29 13:16:04.603469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:01.933 [2024-11-29 13:16:04.603487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.933 [2024-11-29 13:16:04.603493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.612837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.612855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.612861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.624681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.624700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.624706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.636384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.636402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.636412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.647206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.647225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.647231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.659048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.659066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.659072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.669855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.669873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.669880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.681375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.681399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.688100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.688118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.688124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.697339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.697358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.697364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.707888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.707906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.707913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.718671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.718689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.718695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.726822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.726844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.726851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.738333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.738350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.738357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.749827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.749845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.749852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.762079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.762097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.762103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.771387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.195 [2024-11-29 13:16:04.771406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.195 [2024-11-29 13:16:04.771412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.195 [2024-11-29 13:16:04.783084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.783102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.783108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.793311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.793330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.793336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.804441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.804460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.804466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.815138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.815156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.815166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.824622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.824641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.824647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.836706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.836725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.196 [2024-11-29 13:16:04.836732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:02.196 [2024-11-29 13:16:04.844184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570)
00:32:02.196 [2024-11-29 13:16:04.844202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.196 [2024-11-29 13:16:04.844208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.196 [2024-11-29 13:16:04.853246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.196 [2024-11-29 13:16:04.853264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.196 [2024-11-29 13:16:04.853271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.196 [2024-11-29 13:16:04.864894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.196 [2024-11-29 13:16:04.864912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.196 [2024-11-29 13:16:04.864919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.874322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.874347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.880021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 
00:32:02.458 [2024-11-29 13:16:04.880039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.880045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.891538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.891557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.891563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.900724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.900742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.900751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.909633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.909652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.909658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.920513] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.920531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.920537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.932432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.932451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.932457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.943708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.943726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.943732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.954658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.954676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.954682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.964370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.964389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.964395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.974189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.974213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.985611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.985630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.985637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:04.997041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:04.997062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:04.997069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:05.005705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:05.005723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:05.005729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.458 [2024-11-29 13:16:05.017163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.458 [2024-11-29 13:16:05.017181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.458 [2024-11-29 13:16:05.017187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.458 3065.00 IOPS, 383.12 MiB/s [2024-11-29T12:16:05.139Z] [2024-11-29 13:16:05.028756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.028774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.028781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.040379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.040398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:02.459 [2024-11-29 13:16:05.040405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.049540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.049558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.049564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.059742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.059761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.059768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.071019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.071037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.071044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.082490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.082509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.082516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.093478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.093497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.093503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.105011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.105029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.105036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.117038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.117055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.117062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.120783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.120800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.120806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.459 [2024-11-29 13:16:05.131572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.459 [2024-11-29 13:16:05.131590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.459 [2024-11-29 13:16:05.131596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.141519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.141537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.141543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.152856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.152873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.152879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.163008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.163025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.163031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.172880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.172901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.172907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.184637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.184654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.184661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.192642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.192660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.192667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.201511] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.201529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.201535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.212235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.212253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.212259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.219212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.219230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.219236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.228529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.228547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.228553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.237356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.237373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.237380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.248835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.248854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.248860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.258019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.258037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.720 [2024-11-29 13:16:05.258043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.720 [2024-11-29 13:16:05.269573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.720 [2024-11-29 13:16:05.269591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.269597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.280826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.280843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.280849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.292490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.292507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.292513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.302757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.302775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.302780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.312152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.312174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.312180] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.322998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.323016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.323022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.334032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.334050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.334056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.343503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.343521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.343530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.356033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.356050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.356057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.363581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.363599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.363606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.373820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.373837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.373844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.721 [2024-11-29 13:16:05.387017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.721 [2024-11-29 13:16:05.387034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.721 [2024-11-29 13:16:05.387040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.981 [2024-11-29 13:16:05.398778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.981 [2024-11-29 13:16:05.398795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.981 [2024-11-29 13:16:05.398801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.410146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.410167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.410174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.420042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.420059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.420066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.430876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.430893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.442251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 
13:16:05.442272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.442278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.453453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.453471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.453477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.464954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.464971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.464977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.475661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.475678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.475685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.488655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.488672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.488678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.500946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.500963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.500969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.512743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.512760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.512767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.525387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.525404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.525410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.535110] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.535126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.535133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.546004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.546021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.546027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.558507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.558523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.558530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.570085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.570103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.570109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.581513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.581530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.581537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.593803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.593820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.593826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.605475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.605492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.605498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.617246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.617263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.617269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.628472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.628489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.628495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.639100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.639116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.639126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:02.982 [2024-11-29 13:16:05.650479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:02.982 [2024-11-29 13:16:05.650496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.982 [2024-11-29 13:16:05.650503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.660751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.660768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.660774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.671370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.671387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.671393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.683294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.683311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.683317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.694990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.695007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.695013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.708095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.708112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:03.245 [2024-11-29 13:16:05.708119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.717858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.717876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.717882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.729293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.729310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.245 [2024-11-29 13:16:05.729316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.245 [2024-11-29 13:16:05.741721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.245 [2024-11-29 13:16:05.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.741752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.753366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.753383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.753389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.765059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.765077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.765083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.777194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.777218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.777225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.789431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.789448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.789454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.798448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.798466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.798472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.809961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.809978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.809985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.821774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.821791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.821797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.833137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.833154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.833166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.844104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 
00:32:03.246 [2024-11-29 13:16:05.844121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.844127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.856043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.856060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.856066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.865274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.865292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.865298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.877545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.877563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.877569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.885763] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.885781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.885787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.897222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.897239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.897245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.910498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.910515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.910522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.246 [2024-11-29 13:16:05.921774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.246 [2024-11-29 13:16:05.921791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.246 [2024-11-29 13:16:05.921797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:32:03.507 [2024-11-29 13:16:05.934636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.507 [2024-11-29 13:16:05.934654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.507 [2024-11-29 13:16:05.934663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.507 [2024-11-29 13:16:05.946066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.507 [2024-11-29 13:16:05.946083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.507 [2024-11-29 13:16:05.946090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.507 [2024-11-29 13:16:05.957831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.507 [2024-11-29 13:16:05.957848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.507 [2024-11-29 13:16:05.957855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.507 [2024-11-29 13:16:05.966486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.507 [2024-11-29 13:16:05.966503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.507 [2024-11-29 13:16:05.966509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.507 [2024-11-29 13:16:05.975669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:05.975686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:05.975693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.508 [2024-11-29 13:16:05.985913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:05.985930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:05.985936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.508 [2024-11-29 13:16:05.996447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:05.996464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:05.996470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.508 [2024-11-29 13:16:06.005883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:06.005901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:06.005907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.508 [2024-11-29 13:16:06.016171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:06.016188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:06.016195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.508 [2024-11-29 13:16:06.027319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x813570) 00:32:03.508 [2024-11-29 13:16:06.027339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.508 [2024-11-29 13:16:06.027346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.508 2960.50 IOPS, 370.06 MiB/s 00:32:03.508 Latency(us) 00:32:03.508 [2024-11-29T12:16:06.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.508 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:03.508 nvme0n1 : 2.00 2962.14 370.27 0.00 0.00 5398.18 914.77 14308.69 00:32:03.508 [2024-11-29T12:16:06.188Z] =================================================================================================================== 00:32:03.508 [2024-11-29T12:16:06.188Z] Total : 2962.14 370.27 0.00 0.00 5398.18 914.77 14308.69 00:32:03.508 { 00:32:03.508 "results": [ 00:32:03.508 { 00:32:03.508 "job": "nvme0n1", 00:32:03.508 "core_mask": "0x2", 00:32:03.508 "workload": "randread", 00:32:03.508 "status": "finished", 00:32:03.508 "queue_depth": 16, 00:32:03.508 "io_size": 
131072, 00:32:03.508 "runtime": 2.004294, 00:32:03.508 "iops": 2962.1402848085163, 00:32:03.508 "mibps": 370.26753560106454, 00:32:03.508 "io_failed": 0, 00:32:03.508 "io_timeout": 0, 00:32:03.508 "avg_latency_us": 5398.175455617315, 00:32:03.508 "min_latency_us": 914.7733333333333, 00:32:03.508 "max_latency_us": 14308.693333333333 00:32:03.508 } 00:32:03.508 ], 00:32:03.508 "core_count": 1 00:32:03.508 } 00:32:03.508 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:03.508 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:03.508 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:03.508 | .driver_specific 00:32:03.508 | .nvme_error 00:32:03.508 | .status_code 00:32:03.508 | .command_transient_transport_error' 00:32:03.508 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1101519 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1101519 ']' 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1101519 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1101519 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101519' 00:32:03.769 killing process with pid 1101519 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1101519 00:32:03.769 Received shutdown signal, test time was about 2.000000 seconds 00:32:03.769 00:32:03.769 Latency(us) 00:32:03.769 [2024-11-29T12:16:06.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.769 [2024-11-29T12:16:06.449Z] =================================================================================================================== 00:32:03.769 [2024-11-29T12:16:06.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1101519 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1102217 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1102217 /var/tmp/bperf.sock 
00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1102217 ']' 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:03.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.769 13:16:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:04.031 [2024-11-29 13:16:06.471544] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:32:04.031 [2024-11-29 13:16:06.471598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102217 ] 00:32:04.032 [2024-11-29 13:16:06.554288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.032 [2024-11-29 13:16:06.584057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.603 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.604 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:04.604 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:04.604 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:04.864 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:04.865 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.865 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:04.865 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.865 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:04.865 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.125 nvme0n1 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:05.125 13:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:05.386 Running I/O for 2 seconds... 
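(Editorial note, not part of the captured log: the flood of `data_crc32_calc_done: *ERROR*: Data digest error` notices below is the expected outcome of the `accel_error_inject_error -o crc32c -t corrupt -i 256` call above — the controller was attached with `--ddgst`, so every TCP data PDU carries a CRC-32C data digest, and the injected corruption makes the receive-side digest check fail, which the initiator surfaces as `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`. As an illustrative sketch only — this is a generic bit-by-bit CRC-32C, not SPDK's accelerated implementation — the digest being corrupted is the Castagnoli CRC:)

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C (Castagnoli), the checksum NVMe/TCP uses for its
    header and data digests (reflected polynomial 0x82F63B78).
    Plain bit-at-a-time sketch; real stacks use table-driven or
    hardware (SSE4.2 crc32) implementations."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected form: shift right, conditionally XOR the polynomial.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283
```

When the injected error flips the computed digest, the value in the received PDU no longer matches this checksum over the data, and the transport reports the digest failure seen in each notice below.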
00:32:05.386 [2024-11-29 13:16:07.848839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef3e60 00:32:05.386 [2024-11-29 13:16:07.849810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.386 [2024-11-29 13:16:07.849840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:05.386 [2024-11-29 13:16:07.857743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee9e10 00:32:05.386 [2024-11-29 13:16:07.858719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.386 [2024-11-29 13:16:07.858737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.386 [2024-11-29 13:16:07.866325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eefae0 00:32:05.386 [2024-11-29 13:16:07.867309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.386 [2024-11-29 13:16:07.867326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.386 [2024-11-29 13:16:07.874879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:05.386 [2024-11-29 13:16:07.875866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.386 [2024-11-29 13:16:07.875883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.386 [2024-11-29 13:16:07.883420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee9e10 00:32:05.387 [2024-11-29 13:16:07.884389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.884406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.891929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eefae0 00:32:05.387 [2024-11-29 13:16:07.892909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.892927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.900437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:05.387 [2024-11-29 13:16:07.901397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.901414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.908954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee9e10 00:32:05.387 [2024-11-29 13:16:07.909948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.909965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.916821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efac10 00:32:05.387 [2024-11-29 13:16:07.917674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.917692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.926945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eebb98 00:32:05.387 [2024-11-29 13:16:07.928149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.928168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.934104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7100 00:32:05.387 [2024-11-29 13:16:07.934869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.934886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.942491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efc128 00:32:05.387 [2024-11-29 13:16:07.943221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.943237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.950968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efd208 00:32:05.387 [2024-11-29 13:16:07.951740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.951757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.959458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efe2e8 00:32:05.387 [2024-11-29 13:16:07.960208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.960224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.967971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efb8b8 00:32:05.387 [2024-11-29 13:16:07.968735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.968752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.976452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eebb98 00:32:05.387 [2024-11-29 13:16:07.977212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:05.387 [2024-11-29 13:16:07.977229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.984943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eecc78 00:32:05.387 [2024-11-29 13:16:07.985692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.985709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:07.992856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef81e0 00:32:05.387 [2024-11-29 13:16:07.993605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:07.993620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.002465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:05.387 [2024-11-29 13:16:08.003218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.003234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.010964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5378 00:32:05.387 [2024-11-29 13:16:08.011771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22280 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.011787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.019472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9f68 00:32:05.387 [2024-11-29 13:16:08.020241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.020257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.027974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:05.387 [2024-11-29 13:16:08.028721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.028737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.036496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:05.387 [2024-11-29 13:16:08.037256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.037273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.044988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5378 00:32:05.387 [2024-11-29 13:16:08.045786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:7701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.045802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.053505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9f68 00:32:05.387 [2024-11-29 13:16:08.054294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.054313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.387 [2024-11-29 13:16:08.061996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:05.387 [2024-11-29 13:16:08.062795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.387 [2024-11-29 13:16:08.062811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.649 [2024-11-29 13:16:08.070514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:05.649 [2024-11-29 13:16:08.071299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.649 [2024-11-29 13:16:08.071315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.649 [2024-11-29 13:16:08.079019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5378 00:32:05.649 [2024-11-29 13:16:08.079818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.079834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.087509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9f68 00:32:05.650 [2024-11-29 13:16:08.088281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.088297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.096013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:05.650 [2024-11-29 13:16:08.096802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.096818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.104525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:05.650 [2024-11-29 13:16:08.105278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.105294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.113033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5378 00:32:05.650 
[2024-11-29 13:16:08.113813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.113829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.121541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9f68 00:32:05.650 [2024-11-29 13:16:08.122309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.122325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.130413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:05.650 [2024-11-29 13:16:08.131367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.131383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.138821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf988 00:32:05.650 [2024-11-29 13:16:08.139803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.139820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.146773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4d3d0) with pdu=0x200016ef5be8 00:32:05.650 [2024-11-29 13:16:08.147736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.147751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.156043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee84c0 00:32:05.650 [2024-11-29 13:16:08.157022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.164551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef81e0 00:32:05.650 [2024-11-29 13:16:08.165531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.165547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.173043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed0b0 00:32:05.650 [2024-11-29 13:16:08.174017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.174034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.181553] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee84c0 00:32:05.650 [2024-11-29 13:16:08.182552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.182569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.190045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef81e0 00:32:05.650 [2024-11-29 13:16:08.191022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.191038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.197474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efdeb0 00:32:05.650 [2024-11-29 13:16:08.198113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.198129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.207460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee0a68 00:32:05.650 [2024-11-29 13:16:08.208548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.208564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:32:05.650 [2024-11-29 13:16:08.215965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee49b0 00:32:05.650 [2024-11-29 13:16:08.217046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.217063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.223939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee4de8 00:32:05.650 [2024-11-29 13:16:08.224896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.224912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.231890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:05.650 [2024-11-29 13:16:08.232742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.232758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.240699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee0a68 00:32:05.650 [2024-11-29 13:16:08.241515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.241532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.249344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5658 00:32:05.650 [2024-11-29 13:16:08.250203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.250219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.257840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee4578 00:32:05.650 [2024-11-29 13:16:08.258708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.258724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.266424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf550 00:32:05.650 [2024-11-29 13:16:08.267309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.267324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.274913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5a90 00:32:05.650 [2024-11-29 13:16:08.275736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.275754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.283397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef3a28 00:32:05.650 [2024-11-29 13:16:08.284228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.284244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.291906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4b08 00:32:05.650 [2024-11-29 13:16:08.292784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.650 [2024-11-29 13:16:08.292799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.650 [2024-11-29 13:16:08.300422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee38d0 00:32:05.651 [2024-11-29 13:16:08.301300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.651 [2024-11-29 13:16:08.301316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.651 [2024-11-29 13:16:08.308913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf988 00:32:05.651 [2024-11-29 13:16:08.309775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.651 [2024-11-29 13:16:08.309792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.651 [2024-11-29 13:16:08.317417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eec840 00:32:05.651 [2024-11-29 13:16:08.318285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.651 [2024-11-29 13:16:08.318300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.651 [2024-11-29 13:16:08.325895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed920 00:32:05.651 [2024-11-29 13:16:08.326749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.651 [2024-11-29 13:16:08.326766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.334404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeea00 00:32:05.913 [2024-11-29 13:16:08.335263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.335279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.342912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0bc0 00:32:05.913 [2024-11-29 13:16:08.343774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 
[2024-11-29 13:16:08.343790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.351406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1ca0 00:32:05.913 [2024-11-29 13:16:08.352279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.352295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.359882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee84c0 00:32:05.913 [2024-11-29 13:16:08.360745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.360761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.368369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:05.913 [2024-11-29 13:16:08.369229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.369245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.376872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee0ea0 00:32:05.913 [2024-11-29 13:16:08.377734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25096 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.377750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.385386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edfdc0 00:32:05.913 [2024-11-29 13:16:08.386224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.386240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.393892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee49b0 00:32:05.913 [2024-11-29 13:16:08.394748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.394764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.402388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1430 00:32:05.913 [2024-11-29 13:16:08.403241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.403257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.410869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee8088 00:32:05.913 [2024-11-29 13:16:08.411736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:3565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.411753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.420432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:05.913 [2024-11-29 13:16:08.421763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.421778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.428524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1ca0 00:32:05.913 [2024-11-29 13:16:08.429890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.429906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.436270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5ec8 00:32:05.913 [2024-11-29 13:16:08.437024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.437040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.445454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6b70 00:32:05.913 [2024-11-29 13:16:08.446748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.446764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.453287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:05.913 [2024-11-29 13:16:08.454019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.454034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.461759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eee190 00:32:05.913 [2024-11-29 13:16:08.462492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.462509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.470270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee49b0 00:32:05.913 [2024-11-29 13:16:08.471005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.471021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.478944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:05.913 
[2024-11-29 13:16:08.479678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.479694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.488395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef8618 00:32:05.913 [2024-11-29 13:16:08.489488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.489504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.497495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:05.913 [2024-11-29 13:16:08.498597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.498617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.505903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:05.913 [2024-11-29 13:16:08.506981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.913 [2024-11-29 13:16:08.506996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.913 [2024-11-29 13:16:08.514405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) 
with pdu=0x200016ef7970 00:32:05.914 [2024-11-29 13:16:08.515456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.515472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.522900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef46d0 00:32:05.914 [2024-11-29 13:16:08.523990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.524006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.531399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeb760 00:32:05.914 [2024-11-29 13:16:08.532439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.532454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.539906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef270 00:32:05.914 [2024-11-29 13:16:08.540993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.541009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.548388] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4f40 00:32:05.914 [2024-11-29 13:16:08.549450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.549466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.556881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef8618 00:32:05.914 [2024-11-29 13:16:08.557960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.557976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.565390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee88f8 00:32:05.914 [2024-11-29 13:16:08.566477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.566493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.573891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9b30 00:32:05.914 [2024-11-29 13:16:08.574964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.574982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:05.914 [2024-11-29 13:16:08.582403] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf118 00:32:05.914 [2024-11-29 13:16:08.583465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.914 [2024-11-29 13:16:08.583481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.590892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef81e0 00:32:06.176 [2024-11-29 13:16:08.591962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.591978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.599366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:06.176 [2024-11-29 13:16:08.600438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.600453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.607910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed920 00:32:06.176 [2024-11-29 13:16:08.609004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.609019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:32:06.176 [2024-11-29 13:16:08.616417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eefae0 00:32:06.176 [2024-11-29 13:16:08.617456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.624914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efda78 00:32:06.176 [2024-11-29 13:16:08.626007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.626023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.633409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef20d8 00:32:06.176 [2024-11-29 13:16:08.634496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.176 [2024-11-29 13:16:08.634513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.176 [2024-11-29 13:16:08.641894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:06.176 [2024-11-29 13:16:08.642986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.643001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.650380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:06.177 [2024-11-29 13:16:08.651473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.651489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.658892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7970 00:32:06.177 [2024-11-29 13:16:08.659963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.659979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.667468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef46d0 00:32:06.177 [2024-11-29 13:16:08.668558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.668573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.675967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeb760 00:32:06.177 [2024-11-29 13:16:08.677048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.677063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.684441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef270 00:32:06.177 [2024-11-29 13:16:08.685524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.685539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.692928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4f40 00:32:06.177 [2024-11-29 13:16:08.694003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.694019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.701417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef8618 00:32:06.177 [2024-11-29 13:16:08.702451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.702467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.709905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee88f8 00:32:06.177 [2024-11-29 13:16:08.710992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.711008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.718419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9b30 00:32:06.177 [2024-11-29 13:16:08.719487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.719502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.726903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf118 00:32:06.177 [2024-11-29 13:16:08.727974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.727989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.735387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef81e0 00:32:06.177 [2024-11-29 13:16:08.736469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.736485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.743873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef35f0 00:32:06.177 [2024-11-29 13:16:08.744954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:06.177 [2024-11-29 13:16:08.744969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.752373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed920 00:32:06.177 [2024-11-29 13:16:08.753420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.753435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.760883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eefae0 00:32:06.177 [2024-11-29 13:16:08.761967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.761983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.769380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efda78 00:32:06.177 [2024-11-29 13:16:08.770426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.770441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.777860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef20d8 00:32:06.177 [2024-11-29 13:16:08.778939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8298 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.778954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.786355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:06.177 [2024-11-29 13:16:08.787441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.787456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.794841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:06.177 [2024-11-29 13:16:08.795871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.795889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.803340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7970 00:32:06.177 [2024-11-29 13:16:08.804419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.804435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.811830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef46d0 00:32:06.177 [2024-11-29 13:16:08.812914] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.812930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.820328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeb760 00:32:06.177 [2024-11-29 13:16:08.821414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.821430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.828822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef270 00:32:06.177 [2024-11-29 13:16:08.829889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.829905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 [2024-11-29 13:16:08.837315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4f40 00:32:06.177 [2024-11-29 13:16:08.838840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.838856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:06.177 29817.00 IOPS, 116.47 MiB/s [2024-11-29T12:16:08.857Z] [2024-11-29 13:16:08.845799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with 
pdu=0x200016ede470 00:32:06.177 [2024-11-29 13:16:08.846896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.177 [2024-11-29 13:16:08.846913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.854305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5be8 00:32:06.440 [2024-11-29 13:16:08.855392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.855407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.862798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee7818 00:32:06.440 [2024-11-29 13:16:08.863892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.863908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.871284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7da8 00:32:06.440 [2024-11-29 13:16:08.872371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.872386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.879755] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6fa8 00:32:06.440 [2024-11-29 13:16:08.880839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.880854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.888245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4298 00:32:06.440 [2024-11-29 13:16:08.889278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.889294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.896736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee7c50 00:32:06.440 [2024-11-29 13:16:08.897814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.897830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.905237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eecc78 00:32:06.440 [2024-11-29 13:16:08.906298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.906314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.913728] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eedd58 00:32:06.440 [2024-11-29 13:16:08.914812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.914827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.922249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef6a8 00:32:06.440 [2024-11-29 13:16:08.923358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.923374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.930732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee3060 00:32:06.440 [2024-11-29 13:16:08.931812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.931827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.440 [2024-11-29 13:16:08.939321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efe720 00:32:06.440 [2024-11-29 13:16:08.940386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.440 [2024-11-29 13:16:08.940402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:32:06.441 [2024-11-29 13:16:08.947808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ede038 00:32:06.441 [2024-11-29 13:16:08.948898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.948913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.956318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1ca0 00:32:06.441 [2024-11-29 13:16:08.957362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.957378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.964793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee84c0 00:32:06.441 [2024-11-29 13:16:08.965887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.965903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.973277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6738 00:32:06.441 [2024-11-29 13:16:08.974338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.974354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.981776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efa7d8 00:32:06.441 [2024-11-29 13:16:08.982871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.982887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.990287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef8e88 00:32:06.441 [2024-11-29 13:16:08.991375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.991390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:08.998771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef20d8 00:32:06.441 [2024-11-29 13:16:08.999847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:08.999862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.007246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efeb58 00:32:06.441 [2024-11-29 13:16:09.008339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.008354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.015729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:06.441 [2024-11-29 13:16:09.016829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.016847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.024218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efa3a0 00:32:06.441 [2024-11-29 13:16:09.025316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.025332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.032712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6020 00:32:06.441 [2024-11-29 13:16:09.033765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.033781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.041212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee73e0 00:32:06.441 [2024-11-29 13:16:09.042244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.042260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.049735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7970 00:32:06.441 [2024-11-29 13:16:09.050836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.050852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.058213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee23b8 00:32:06.441 [2024-11-29 13:16:09.059292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.059307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.066678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef46d0 00:32:06.441 [2024-11-29 13:16:09.067759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.067774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.075164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5ec8 00:32:06.441 [2024-11-29 13:16:09.076244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 
[2024-11-29 13:16:09.076259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.083649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeb760 00:32:06.441 [2024-11-29 13:16:09.084736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.092428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eea248 00:32:06.441 [2024-11-29 13:16:09.093622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.093637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.100230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eee190 00:32:06.441 [2024-11-29 13:16:09.101299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.101316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.108985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5a90 00:32:06.441 [2024-11-29 13:16:09.110064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13154 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:06.441 [2024-11-29 13:16:09.110080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.441 [2024-11-29 13:16:09.117491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee23b8 00:32:06.703 [2024-11-29 13:16:09.118566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.118582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.126003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5a90 00:32:06.703 [2024-11-29 13:16:09.127084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.127100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.134524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee23b8 00:32:06.703 [2024-11-29 13:16:09.135613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.135629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.143034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5a90 00:32:06.703 [2024-11-29 13:16:09.144114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:19796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.144130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.151534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee23b8 00:32:06.703 [2024-11-29 13:16:09.152620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.152635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.160024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5a90 00:32:06.703 [2024-11-29 13:16:09.161105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.161120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.168538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee23b8 00:32:06.703 [2024-11-29 13:16:09.169611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.703 [2024-11-29 13:16:09.169627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.703 [2024-11-29 13:16:09.176364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efd208 00:32:06.704 [2024-11-29 13:16:09.177329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.177345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.185148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef3e60 00:32:06.704 [2024-11-29 13:16:09.186122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.186138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.193783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eea248 00:32:06.704 [2024-11-29 13:16:09.194752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.194769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.202256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6cc8 00:32:06.704 [2024-11-29 13:16:09.203237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.203253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.210745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0350 00:32:06.704 
[2024-11-29 13:16:09.211717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.211733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.219248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:06.704 [2024-11-29 13:16:09.220234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.220251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.227740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edfdc0 00:32:06.704 [2024-11-29 13:16:09.228714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.228730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.236230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee49b0 00:32:06.704 [2024-11-29 13:16:09.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.237216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.244716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4d3d0) with pdu=0x200016ef1430 00:32:06.704 [2024-11-29 13:16:09.245693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.245708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.253188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee95a0 00:32:06.704 [2024-11-29 13:16:09.254155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.254173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.261676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0bc0 00:32:06.704 [2024-11-29 13:16:09.262646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.262662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.270172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef92c0 00:32:06.704 [2024-11-29 13:16:09.271144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.271162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.278666] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edf988 00:32:06.704 [2024-11-29 13:16:09.279651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.279667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.287133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef2d80 00:32:06.704 [2024-11-29 13:16:09.288119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.288134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.295620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efd640 00:32:06.704 [2024-11-29 13:16:09.296594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.296610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.304104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ede038 00:32:06.704 [2024-11-29 13:16:09.305093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.305109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:32:06.704 [2024-11-29 13:16:09.312609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efe720 00:32:06.704 [2024-11-29 13:16:09.313581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.313597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.321107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ede8a8 00:32:06.704 [2024-11-29 13:16:09.322092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.322107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.329596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed0b0 00:32:06.704 [2024-11-29 13:16:09.330565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.330581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.338074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eea680 00:32:06.704 [2024-11-29 13:16:09.339045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.339061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.346570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7100 00:32:06.704 [2024-11-29 13:16:09.347540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.347556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.355072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eedd58 00:32:06.704 [2024-11-29 13:16:09.356003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.356018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.363563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efdeb0 00:32:06.704 [2024-11-29 13:16:09.364526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.364542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.372048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee01f8 00:32:06.704 [2024-11-29 13:16:09.372998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.704 [2024-11-29 13:16:09.373013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.704 [2024-11-29 13:16:09.380534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee4de8 00:32:06.967 [2024-11-29 13:16:09.381480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.967 [2024-11-29 13:16:09.381496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.389012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0ff8 00:32:06.968 [2024-11-29 13:16:09.389980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.389996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.397513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eebfd0 00:32:06.968 [2024-11-29 13:16:09.398484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.398499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.406000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0788 00:32:06.968 [2024-11-29 13:16:09.406968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.406984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.414511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef96f8 00:32:06.968 [2024-11-29 13:16:09.415472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.415488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.422988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efc560 00:32:06.968 [2024-11-29 13:16:09.423960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.423975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.431773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee88f8 00:32:06.968 [2024-11-29 13:16:09.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.432873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.441068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eff3c8 00:32:06.968 [2024-11-29 13:16:09.442497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 
[2024-11-29 13:16:09.442513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.449718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1430 00:32:06.968 [2024-11-29 13:16:09.451134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.451150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.456793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeee38 00:32:06.968 [2024-11-29 13:16:09.457597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.457617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.465213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:06.968 [2024-11-29 13:16:09.465948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.465964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.473704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eeee38 00:32:06.968 [2024-11-29 13:16:09.474454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2508 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.474470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.483455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:06.968 [2024-11-29 13:16:09.484755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.484771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.490505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efda78 00:32:06.968 [2024-11-29 13:16:09.491080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.491096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.499220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee38d0 00:32:06.968 [2024-11-29 13:16:09.500061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.500077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.507717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef20d8 00:32:06.968 [2024-11-29 13:16:09.508551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:1632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.508566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.516201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efeb58 00:32:06.968 [2024-11-29 13:16:09.517002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.517017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.524682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:06.968 [2024-11-29 13:16:09.525521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.525536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.533178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef2948 00:32:06.968 [2024-11-29 13:16:09.534021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.534042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.541680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef270 00:32:06.968 [2024-11-29 13:16:09.542510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.542526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.550180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0ff8 00:32:06.968 [2024-11-29 13:16:09.551021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.551036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.558669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee4de8 00:32:06.968 [2024-11-29 13:16:09.559503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.559518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.567144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef31b8 00:32:06.968 [2024-11-29 13:16:09.567978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.567994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.575610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef4298 
00:32:06.968 [2024-11-29 13:16:09.576446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.576462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.584096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eec408 00:32:06.968 [2024-11-29 13:16:09.584940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.584955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.592587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eea248 00:32:06.968 [2024-11-29 13:16:09.593421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.593437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.601103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5658 00:32:06.968 [2024-11-29 13:16:09.601931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.968 [2024-11-29 13:16:09.601946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.968 [2024-11-29 13:16:09.609580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4d3d0) with pdu=0x200016ee0a68 00:32:06.968 [2024-11-29 13:16:09.610404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.969 [2024-11-29 13:16:09.610420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.969 [2024-11-29 13:16:09.618052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0350 00:32:06.969 [2024-11-29 13:16:09.618886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.969 [2024-11-29 13:16:09.618901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.969 [2024-11-29 13:16:09.626560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efef90 00:32:06.969 [2024-11-29 13:16:09.627363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.969 [2024-11-29 13:16:09.627379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.969 [2024-11-29 13:16:09.635064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6890 00:32:06.969 [2024-11-29 13:16:09.635889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.969 [2024-11-29 13:16:09.635905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.969 [2024-11-29 13:16:09.643565] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef5be8 00:32:06.969 [2024-11-29 13:16:09.644367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.969 [2024-11-29 13:16:09.644382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.230 [2024-11-29 13:16:09.652049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee7818 00:32:07.231 [2024-11-29 13:16:09.652893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.660532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7da8 00:32:07.231 [2024-11-29 13:16:09.661367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.661383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.669008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef6458 00:32:07.231 [2024-11-29 13:16:09.669838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.669853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:32:07.231 [2024-11-29 13:16:09.677497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016edece0 00:32:07.231 [2024-11-29 13:16:09.678343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.678358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.686010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5ec8 00:32:07.231 [2024-11-29 13:16:09.686844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.686860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.694625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef1430 00:32:07.231 [2024-11-29 13:16:09.695461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.695476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.703103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efbcf0 00:32:07.231 [2024-11-29 13:16:09.703951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.703967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.711590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef9f68 00:32:07.231 [2024-11-29 13:16:09.712388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.712404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.720085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ede8a8 00:32:07.231 [2024-11-29 13:16:09.720882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.720898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.728577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eed0b0 00:32:07.231 [2024-11-29 13:16:09.729414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.729430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.737079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eea680 00:32:07.231 [2024-11-29 13:16:09.737913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.737929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.745566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee5220 00:32:07.231 [2024-11-29 13:16:09.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.746419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.754045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef7100 00:32:07.231 [2024-11-29 13:16:09.754873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.754891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.762520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eedd58 00:32:07.231 [2024-11-29 13:16:09.763363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.763378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.771013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efdeb0 00:32:07.231 [2024-11-29 13:16:09.771847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.771863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.779519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee38d0 00:32:07.231 [2024-11-29 13:16:09.780362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.780378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.788010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef20d8 00:32:07.231 [2024-11-29 13:16:09.788840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.788855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.796478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016efeb58 00:32:07.231 [2024-11-29 13:16:09.797303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.797319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.804948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee6300 00:32:07.231 [2024-11-29 13:16:09.805777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 
[2024-11-29 13:16:09.805793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.813434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef2948 00:32:07.231 [2024-11-29 13:16:09.814248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.814263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.821929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016eef270 00:32:07.231 [2024-11-29 13:16:09.822759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.822775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.830430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ef0ff8 00:32:07.231 [2024-11-29 13:16:09.831237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.831253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 [2024-11-29 13:16:09.838944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d3d0) with pdu=0x200016ee4de8 00:32:07.231 [2024-11-29 13:16:09.839785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8749 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:07.231 [2024-11-29 13:16:09.839801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:07.231 29944.00 IOPS, 116.97 MiB/s 00:32:07.231 Latency(us) 00:32:07.231 [2024-11-29T12:16:09.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.231 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.231 nvme0n1 : 2.00 29965.72 117.05 0.00 0.00 4266.37 2061.65 11359.57 00:32:07.231 [2024-11-29T12:16:09.911Z] =================================================================================================================== 00:32:07.231 [2024-11-29T12:16:09.911Z] Total : 29965.72 117.05 0.00 0.00 4266.37 2061.65 11359.57 00:32:07.231 { 00:32:07.231 "results": [ 00:32:07.231 { 00:32:07.231 "job": "nvme0n1", 00:32:07.231 "core_mask": "0x2", 00:32:07.231 "workload": "randwrite", 00:32:07.231 "status": "finished", 00:32:07.231 "queue_depth": 128, 00:32:07.231 "io_size": 4096, 00:32:07.231 "runtime": 2.004357, 00:32:07.231 "iops": 29965.719679677823, 00:32:07.231 "mibps": 117.0535924987415, 00:32:07.231 "io_failed": 0, 00:32:07.231 "io_timeout": 0, 00:32:07.231 "avg_latency_us": 4266.367967322655, 00:32:07.231 "min_latency_us": 2061.653333333333, 00:32:07.231 "max_latency_us": 11359.573333333334 00:32:07.231 } 00:32:07.231 ], 00:32:07.231 "core_count": 1 00:32:07.231 } 00:32:07.231 13:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:07.232 13:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:07.232 13:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:07.232 13:16:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:07.232 | .driver_specific 00:32:07.232 | .nvme_error 00:32:07.232 | .status_code 00:32:07.232 | .command_transient_transport_error' 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 )) 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1102217 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1102217 ']' 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1102217 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102217 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102217' 00:32:07.493 killing process with pid 1102217 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1102217 00:32:07.493 Received shutdown signal, test time was about 2.000000 seconds 00:32:07.493 00:32:07.493 Latency(us) 00:32:07.493 [2024-11-29T12:16:10.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.493 [2024-11-29T12:16:10.173Z] 
=================================================================================================================== 00:32:07.493 [2024-11-29T12:16:10.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.493 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1102217 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1102997 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1102997 /var/tmp/bperf.sock 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1102997 ']' 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
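The `get_transient_errcount` step above pipes `bdev_get_iostat -b nvme0n1` through `jq` to pull the transient-transport-error count out of the RPC reply, then gates the test on `(( 235 > 0 ))`. A rough sketch of that extraction, using a hypothetical hand-written JSON sample shaped like the iostat output (not the real RPC reply):

```shell
# Hypothetical sample mirroring the bdev_get_iostat shape queried above.
cat > /tmp/iostat_sample.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 235
          }
        }
      }
    }
  ]
}
EOF

# Same filter as host/digest.sh@28, collapsed onto one line.
errcount=$(jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error' /tmp/iostat_sample.json)
echo "$errcount"

# Mirrors the (( 235 > 0 )) check: the test only passes if the injected
# CRC32C corruption actually produced counted digest errors.
if (( errcount > 0 )); then
    echo "digest errors were counted"
fi
```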
00:32:07.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.754 13:16:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:07.754 [2024-11-29 13:16:10.266216] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:32:07.754 [2024-11-29 13:16:10.266275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102997 ] 00:32:07.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:07.754 Zero copy mechanism will not be used. 00:32:07.754 [2024-11-29 13:16:10.349953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.754 [2024-11-29 13:16:10.379571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.697 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.958 nvme0n1 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:09.219 13:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:09.219 Zero copy mechanism will not be used. 00:32:09.220 Running I/O for 2 seconds... 
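The setup above configures per-status-code NVMe error accounting, injects a CRC32C fault on every 32nd calculation, and attaches the controller with data digest enabled, which is why the run that follows is dominated by `data_crc32_calc_done` errors. A condensed sketch of that RPC sequence, with paths and arguments copied from the log (illustrative only; it assumes an SPDK checkout and a bdevperf instance already listening on `/var/tmp/bperf.sock`):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# Count NVMe error completions per status code so the test can read them back
# via bdev_get_iostat; retry forever so injected errors don't fail the I/O path.
$RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Corrupt every 32nd CRC32C calculation, producing data digest errors.
$RPC -s $SOCK accel_error_inject_error -o crc32c -t corrupt -i 32

# Attach with data digest enabled (--ddgst) so the corrupted CRCs are detected.
$RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
```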
00:32:09.220 [2024-11-29 13:16:11.754164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.754447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.754471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.763813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.764097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.770739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.770799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.770816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.774020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.774221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.774236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.784251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.784417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.784433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.791968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.792270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.792287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.797832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.797880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.797896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.801479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.801534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.801550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.805472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.805537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.805552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.810032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.810109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.810125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.817191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.817265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.817280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.821606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.821675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.821691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.826738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.826807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.834022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.834067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.834082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.840011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.840073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.844186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.844232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 
[2024-11-29 13:16:11.844247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.848414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.848463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.848479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.852994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.853054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.853069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.856797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.856871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.856887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.863670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.863720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.863735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.870169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.870284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.870299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.879581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.879838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.879853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.887263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.887322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.887337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.891553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.891619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.891634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.220 [2024-11-29 13:16:11.895502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.220 [2024-11-29 13:16:11.895822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.220 [2024-11-29 13:16:11.895838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.902298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.902384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.902399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.906413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.906465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.906481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.910080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.910365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.910382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.914494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.914558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.914573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.918488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.918555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.918570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.926152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.926217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.933185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 
[2024-11-29 13:16:11.933476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.933493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.938274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.938341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.938356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.943239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.943522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.943536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.947620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.947692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.947707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.951359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.951420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.951435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.955663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.955850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.955866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.959536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.959780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.959795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.966743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.966929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.966945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.971165] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.971362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.483 [2024-11-29 13:16:11.971378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.483 [2024-11-29 13:16:11.975634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.483 [2024-11-29 13:16:11.975964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.975983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:11.982083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:11.982307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.982323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:11.986293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:11.986481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.986497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:32:09.484 [2024-11-29 13:16:11.989843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:11.990032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.990048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:11.993584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:11.993773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:11.996998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:11.997192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:11.997208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.000666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.000854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.000870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.003896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.004084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.004100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.007532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.007719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.007735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.010718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.010909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.010926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.013914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.014100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.014116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.017918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.018116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.018132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.021991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.022306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.022321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.027300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.027598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.027615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.031025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.031219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.031235] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.034540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.034726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.034742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.037920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.037962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.037977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.041674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.041860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.041876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.045872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.046062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:09.484 [2024-11-29 13:16:12.046078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.049267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.049458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.049473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.052447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.052635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.052650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.055894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.056079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.484 [2024-11-29 13:16:12.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:09.484 [2024-11-29 13:16:12.060522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:09.484 [2024-11-29 13:16:12.060840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.060856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.484 [2024-11-29 13:16:12.064041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.484 [2024-11-29 13:16:12.064233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.064249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.484 [2024-11-29 13:16:12.067943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.484 [2024-11-29 13:16:12.068016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.068031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.484 [2024-11-29 13:16:12.078351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.484 [2024-11-29 13:16:12.078529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.078545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.484 [2024-11-29 13:16:12.085977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.484 [2024-11-29 13:16:12.086189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.086208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.484 [2024-11-29 13:16:12.090164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.484 [2024-11-29 13:16:12.090359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.484 [2024-11-29 13:16:12.090375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.099650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.099845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.099861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.108505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.108679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.108696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.118131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.118197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.118212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.129091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.129431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.129446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.136039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.136100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.139883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.139939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.139954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.143329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.143389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.143405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.485 [2024-11-29 13:16:12.151526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.485 [2024-11-29 13:16:12.151788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.485 [2024-11-29 13:16:12.151803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.162449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.162702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.162718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.172754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.172874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.172889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.181296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.181341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.181356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.185861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.185912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.185927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.190609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.190649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.190665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.195723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.195766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.195781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.204106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.204164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.746 [2024-11-29 13:16:12.204180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.746 [2024-11-29 13:16:12.211430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.746 [2024-11-29 13:16:12.211690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.211705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.219345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.219642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.219657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.226665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.226712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.226728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.233914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.233991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.239562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.239629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.239644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.245051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.245103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.252720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.252785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.252800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.257678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.257722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.257737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.263716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.264028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.264044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.271365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.271634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.271653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.276324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.276369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.276385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.282023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.282074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.282089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.288003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.288065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.288080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.293988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.294032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.294047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.300769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.300828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.300844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.308798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.308966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.308981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.314919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.314962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.314977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.321374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.321428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.321443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.329331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.329642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.329658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.338745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.339047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.339063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.346132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.346202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.346218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.351969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.352113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.352129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.359028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.359342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.359358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.365068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.365397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.365413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.372683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.372737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.372753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.377572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.377621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.377636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.385327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.385404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.747 [2024-11-29 13:16:12.385419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:09.747 [2024-11-29 13:16:12.393275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.747 [2024-11-29 13:16:12.393336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.748 [2024-11-29 13:16:12.393352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:09.748 [2024-11-29 13:16:12.400906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.748 [2024-11-29 13:16:12.400965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.748 [2024-11-29 13:16:12.400980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:09.748 [2024-11-29 13:16:12.410178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.748 [2024-11-29 13:16:12.410235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.748 [2024-11-29 13:16:12.410250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:09.748 [2024-11-29 13:16:12.417617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:09.748 [2024-11-29 13:16:12.417667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:09.748 [2024-11-29 13:16:12.417682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.424831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.424894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.424909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.434061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.434278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.434293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.440866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.440923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.440938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.449316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.449374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.449389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.455022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.455076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.455093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.459761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.459851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.459866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.470391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.470461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.470476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.480897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.481177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.481192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.491008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.491092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.491107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.502387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.502652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.009 [2024-11-29 13:16:12.502667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.009 [2024-11-29 13:16:12.513703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.009 [2024-11-29 13:16:12.513960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.513975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.524467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.524701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.524716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.534088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.534151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.534170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.544787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.545059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.545076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.555414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.555610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.555625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.566078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.566408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.566424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.577759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.577941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.577956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.589448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.589794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.589810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.601004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.601235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.601251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.612052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.612325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.612341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.622704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.622961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.633957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.634180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.634196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.645197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.645469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.645485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.656519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.656614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.656628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.667023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.667286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.667300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.010 [2024-11-29 13:16:12.678744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.010 [2024-11-29 13:16:12.678987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.010 [2024-11-29 13:16:12.679003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.271 [2024-11-29 13:16:12.689926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.271 [2024-11-29 13:16:12.690218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.271 [2024-11-29 13:16:12.690233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:10.271 [2024-11-29 13:16:12.700981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.271 [2024-11-29 13:16:12.701184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.271 [2024-11-29 13:16:12.701200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:10.271 [2024-11-29 13:16:12.711219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.271 [2024-11-29 13:16:12.711449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.271 [2024-11-29 13:16:12.711464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:10.271 [2024-11-29 13:16:12.721924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.271 [2024-11-29 13:16:12.722308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.271 [2024-11-29 13:16:12.722324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:10.271 [2024-11-29 13:16:12.730861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8
00:32:10.271 [2024-11-29 13:16:12.731115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1
lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.731131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.740444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.740616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.740632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.748905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.749010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.749025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.271 4543.00 IOPS, 567.88 MiB/s [2024-11-29T12:16:12.951Z] [2024-11-29 13:16:12.759673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.759901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.759916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.770318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 
13:16:12.770567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.770582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.781708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.781988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.782004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.793574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.793823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.793838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.805107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.805340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.805356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.271 [2024-11-29 13:16:12.815921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with 
pdu=0x200016eff3c8 00:32:10.271 [2024-11-29 13:16:12.816214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.271 [2024-11-29 13:16:12.816230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.827174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.827420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.827435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.838570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.838760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.838775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.849668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.849993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.850009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.858679] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.858729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.858744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.864083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.864376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.864392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.873787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.874097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.874113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.881993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.882047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.882062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 
13:16:12.887827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.887883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.887898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.895470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.895579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.895594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.902961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.903018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.903034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.907877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.907926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.907941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.916185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.916271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.916286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.924293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.924357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.924373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.933221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.933280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.933295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.272 [2024-11-29 13:16:12.940309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.272 [2024-11-29 13:16:12.940353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.272 [2024-11-29 13:16:12.940368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.949738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.949936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.949951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.957130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.957188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.957203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.962995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.963051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.963068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.970185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.970231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.970246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.976766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.977033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.983894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.984125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.984140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.988726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.989031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:12.989046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:12.997432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:12.997480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 
[2024-11-29 13:16:12.997495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.006363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.006580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.006595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.013454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.013504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.020528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.020726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.020741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.028278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.028357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.028374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.035983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.036080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.036095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.043834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.044117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.053300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.053381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.053396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.060454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.060500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.060515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.069201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.069459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.069474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.076452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.076509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.076525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.083039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.083107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.083122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.092896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.093216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.093231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.100191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.535 [2024-11-29 13:16:13.100255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.535 [2024-11-29 13:16:13.100270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.535 [2024-11-29 13:16:13.105929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.105976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.105991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.112993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.113046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.113061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.118493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 
[2024-11-29 13:16:13.118536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.118552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.125595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.126015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.126031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.130437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.130498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.130513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.134986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.135041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.135056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.143840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.143888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.143903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.150585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.150678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.150693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.157216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.157322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.157337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.165091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.165143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.165163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.173868] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.174120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.174136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.181883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.181962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.181977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.188573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.188650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.188665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.195269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.195523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.195539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:32:10.536 [2024-11-29 13:16:13.203469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.203540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.203555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.536 [2024-11-29 13:16:13.209136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.536 [2024-11-29 13:16:13.209206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.536 [2024-11-29 13:16:13.209221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.216423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.216481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.216499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.221581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.221663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.221678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.225885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.225964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.225978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.233725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.233810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.233827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.238918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.238976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.238991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.243075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.243122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.243136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.249367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.249430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.249445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.257033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.257080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.257095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.797 [2024-11-29 13:16:13.264234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.797 [2024-11-29 13:16:13.264302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.797 [2024-11-29 13:16:13.264316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.272308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.272450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.272465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.280909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.281066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.281082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.286108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.286207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.286222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.292883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.293171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.293187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.300190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.300238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:10.798 [2024-11-29 13:16:13.300253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.305899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.305950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.305965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.312201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.312246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.312261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.317855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.318161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.318177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.324866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.324939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.324954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.330896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.331150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.331170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.337471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.337517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.337531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.343179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.343242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.343257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.348820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.349126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.349142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.355558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.355623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.355638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.360270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.360381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.360396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.368281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.368584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.368600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.378912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.379201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.379218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.389869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.390121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.390147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.400263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.400485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.400500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.410675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.410930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.410945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.420664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 
00:32:10.798 [2024-11-29 13:16:13.420894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.420909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.431366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.431592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.431607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.441445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.441786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.441801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.451379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.451627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.451648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.461831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.461876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.461891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:10.798 [2024-11-29 13:16:13.471273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:10.798 [2024-11-29 13:16:13.471566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.798 [2024-11-29 13:16:13.471582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.060 [2024-11-29 13:16:13.481907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.060 [2024-11-29 13:16:13.482150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.060 [2024-11-29 13:16:13.482171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.060 [2024-11-29 13:16:13.492511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.060 [2024-11-29 13:16:13.492798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.060 [2024-11-29 13:16:13.492814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.060 [2024-11-29 13:16:13.504072] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.060 [2024-11-29 13:16:13.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.060 [2024-11-29 13:16:13.504201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.060 [2024-11-29 13:16:13.514819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.060 [2024-11-29 13:16:13.515089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.060 [2024-11-29 13:16:13.515106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.060 [2024-11-29 13:16:13.525351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.060 [2024-11-29 13:16:13.525430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.060 [2024-11-29 13:16:13.525445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.537353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.537629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.537645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:32:11.061 [2024-11-29 13:16:13.547923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.548099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.548114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.558945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.559246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.570047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.570281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.570296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.577992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.578241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.578257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.588881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.589190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.589207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.599191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.599250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.599265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.603451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.603537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.603552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.608721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.608791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.608806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.613263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.613310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.613325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.617622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.617684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.617699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.621588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.621630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.621645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.625686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.625730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.625748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.630716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.630778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.630793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.636678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.636735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.636750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.645096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.645366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.645382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.650177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.650223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:11.061 [2024-11-29 13:16:13.650238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.654920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.654993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.655008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.662773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.662819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.662834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.667628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.667674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.667689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.672388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.672434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.672449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.676910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.676967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.676983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.681659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.681705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.681720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.686288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.686559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.686574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.694198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.694438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.694453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.701405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.701460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.061 [2024-11-29 13:16:13.701475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.061 [2024-11-29 13:16:13.705108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.061 [2024-11-29 13:16:13.705157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.705178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.710218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.710264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.710278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.714157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.714228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.714243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.718341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.718388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.718403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.722018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.722064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.722079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.725047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.725098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.725113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.728795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 
00:32:11.062 [2024-11-29 13:16:13.728999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.729014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.062 [2024-11-29 13:16:13.736393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.062 [2024-11-29 13:16:13.736703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.062 [2024-11-29 13:16:13.736718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.323 [2024-11-29 13:16:13.742594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.323 [2024-11-29 13:16:13.742656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.323 [2024-11-29 13:16:13.742671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:11.323 [2024-11-29 13:16:13.748900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.323 [2024-11-29 13:16:13.748946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.323 [2024-11-29 13:16:13.748961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.323 [2024-11-29 13:16:13.754393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.323 [2024-11-29 13:16:13.754438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.323 [2024-11-29 13:16:13.754453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.323 4365.50 IOPS, 545.69 MiB/s [2024-11-29T12:16:14.003Z] [2024-11-29 13:16:13.758712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd4d710) with pdu=0x200016eff3c8 00:32:11.323 [2024-11-29 13:16:13.758765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.323 [2024-11-29 13:16:13.758780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:11.323 00:32:11.323 Latency(us) 00:32:11.323 [2024-11-29T12:16:14.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:11.323 nvme0n1 : 2.00 4368.65 546.08 0.00 0.00 3658.49 1358.51 12178.77 00:32:11.323 [2024-11-29T12:16:14.003Z] =================================================================================================================== 00:32:11.323 [2024-11-29T12:16:14.003Z] Total : 4368.65 546.08 0.00 0.00 3658.49 1358.51 12178.77 00:32:11.323 { 00:32:11.323 "results": [ 00:32:11.323 { 00:32:11.323 "job": "nvme0n1", 00:32:11.323 "core_mask": "0x2", 00:32:11.323 "workload": "randwrite", 00:32:11.323 "status": "finished", 00:32:11.323 "queue_depth": 16, 00:32:11.323 "io_size": 131072, 00:32:11.323 "runtime": 2.002906, 00:32:11.323 "iops": 4368.6523481381555, 00:32:11.323 "mibps": 546.0815435172694, 00:32:11.323 "io_failed": 0, 00:32:11.323 "io_timeout": 0, 00:32:11.323 
"avg_latency_us": 3658.492196571429, 00:32:11.323 "min_latency_us": 1358.5066666666667, 00:32:11.323 "max_latency_us": 12178.773333333333 00:32:11.323 } 00:32:11.323 ], 00:32:11.323 "core_count": 1 00:32:11.323 } 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:11.323 | .driver_specific 00:32:11.323 | .nvme_error 00:32:11.323 | .status_code 00:32:11.323 | .command_transient_transport_error' 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 283 > 0 )) 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1102997 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1102997 ']' 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1102997 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.323 13:16:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102997 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:11.583 13:16:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102997' 00:32:11.583 killing process with pid 1102997 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1102997 00:32:11.583 Received shutdown signal, test time was about 2.000000 seconds 00:32:11.583 00:32:11.583 Latency(us) 00:32:11.583 [2024-11-29T12:16:14.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.583 [2024-11-29T12:16:14.263Z] =================================================================================================================== 00:32:11.583 [2024-11-29T12:16:14.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1102997 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1100607 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1100607 ']' 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1100607 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100607 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100607' 00:32:11.583 killing process with pid 1100607 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1100607 00:32:11.583 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1100607 00:32:11.844 00:32:11.844 real 0m16.424s 00:32:11.844 user 0m32.568s 00:32:11.844 sys 0m3.569s 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:11.844 ************************************ 00:32:11.844 END TEST nvmf_digest_error 00:32:11.844 ************************************ 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.844 rmmod nvme_tcp 00:32:11.844 rmmod nvme_fabrics 00:32:11.844 rmmod nvme_keyring 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@128 -- # set -e 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1100607 ']' 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1100607 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1100607 ']' 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1100607 00:32:11.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1100607) - No such process 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1100607 is not found' 00:32:11.844 Process with pid 1100607 is not found 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:11.844 13:16:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.389 00:32:14.389 real 0m43.154s 00:32:14.389 user 1m7.622s 00:32:14.389 sys 0m13.118s 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:14.389 ************************************ 00:32:14.389 END TEST nvmf_digest 00:32:14.389 ************************************ 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.389 ************************************ 00:32:14.389 START TEST nvmf_bdevperf 00:32:14.389 ************************************ 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:14.389 * Looking for test storage... 
00:32:14.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.389 --rc genhtml_branch_coverage=1 00:32:14.389 --rc genhtml_function_coverage=1 00:32:14.389 --rc genhtml_legend=1 00:32:14.389 --rc geninfo_all_blocks=1 00:32:14.389 --rc geninfo_unexecuted_blocks=1 00:32:14.389 00:32:14.389 ' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:32:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.389 --rc genhtml_branch_coverage=1 00:32:14.389 --rc genhtml_function_coverage=1 00:32:14.389 --rc genhtml_legend=1 00:32:14.389 --rc geninfo_all_blocks=1 00:32:14.389 --rc geninfo_unexecuted_blocks=1 00:32:14.389 00:32:14.389 ' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.389 --rc genhtml_branch_coverage=1 00:32:14.389 --rc genhtml_function_coverage=1 00:32:14.389 --rc genhtml_legend=1 00:32:14.389 --rc geninfo_all_blocks=1 00:32:14.389 --rc geninfo_unexecuted_blocks=1 00:32:14.389 00:32:14.389 ' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.389 --rc genhtml_branch_coverage=1 00:32:14.389 --rc genhtml_function_coverage=1 00:32:14.389 --rc genhtml_legend=1 00:32:14.389 --rc geninfo_all_blocks=1 00:32:14.389 --rc geninfo_unexecuted_blocks=1 00:32:14.389 00:32:14.389 ' 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.389 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:14.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:14.390 13:16:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.535 13:16:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.535 13:16:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:22.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.535 
13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:22.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:22.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:22.535 13:16:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:22.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.535 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:32:22.536 00:32:22.536 --- 10.0.0.2 ping statistics --- 00:32:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.536 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:22.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:22.536 00:32:22.536 --- 10.0.0.1 ping statistics --- 00:32:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.536 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1107911 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1107911 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1107911 ']' 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.536 13:16:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.536 [2024-11-29 13:16:24.417252] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:32:22.536 [2024-11-29 13:16:24.417321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.536 [2024-11-29 13:16:24.518300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:22.536 [2024-11-29 13:16:24.571180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.536 [2024-11-29 13:16:24.571229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.536 [2024-11-29 13:16:24.571238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.536 [2024-11-29 13:16:24.571245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.536 [2024-11-29 13:16:24.571251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:22.536 [2024-11-29 13:16:24.573377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.536 [2024-11-29 13:16:24.573631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.536 [2024-11-29 13:16:24.573632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 [2024-11-29 13:16:25.298492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 Malloc0 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:22.796 [2024-11-29 13:16:25.371477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:22.796 
13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:22.796 { 00:32:22.796 "params": { 00:32:22.796 "name": "Nvme$subsystem", 00:32:22.796 "trtype": "$TEST_TRANSPORT", 00:32:22.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.796 "adrfam": "ipv4", 00:32:22.796 "trsvcid": "$NVMF_PORT", 00:32:22.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.796 "hdgst": ${hdgst:-false}, 00:32:22.796 "ddgst": ${ddgst:-false} 00:32:22.796 }, 00:32:22.796 "method": "bdev_nvme_attach_controller" 00:32:22.796 } 00:32:22.796 EOF 00:32:22.796 )") 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:22.796 13:16:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:22.796 "params": { 00:32:22.796 "name": "Nvme1", 00:32:22.796 "trtype": "tcp", 00:32:22.796 "traddr": "10.0.0.2", 00:32:22.796 "adrfam": "ipv4", 00:32:22.796 "trsvcid": "4420", 00:32:22.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:22.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:22.796 "hdgst": false, 00:32:22.796 "ddgst": false 00:32:22.796 }, 00:32:22.796 "method": "bdev_nvme_attach_controller" 00:32:22.796 }' 00:32:22.796 [2024-11-29 13:16:25.430107] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:32:22.796 [2024-11-29 13:16:25.430181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108254 ] 00:32:23.056 [2024-11-29 13:16:25.523890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.056 [2024-11-29 13:16:25.577676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.316 Running I/O for 1 seconds... 00:32:24.260 8446.00 IOPS, 32.99 MiB/s 00:32:24.260 Latency(us) 00:32:24.260 [2024-11-29T12:16:26.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.260 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:24.260 Verification LBA range: start 0x0 length 0x4000 00:32:24.260 Nvme1n1 : 1.01 8525.07 33.30 0.00 0.00 14943.87 1235.63 14090.24 00:32:24.260 [2024-11-29T12:16:26.940Z] =================================================================================================================== 00:32:24.260 [2024-11-29T12:16:26.940Z] Total : 8525.07 33.30 0.00 0.00 14943.87 1235.63 14090.24 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1108597 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:24.521 { 00:32:24.521 "params": { 00:32:24.521 "name": "Nvme$subsystem", 00:32:24.521 "trtype": "$TEST_TRANSPORT", 00:32:24.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.521 "adrfam": "ipv4", 00:32:24.521 "trsvcid": "$NVMF_PORT", 00:32:24.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.521 "hdgst": ${hdgst:-false}, 00:32:24.521 "ddgst": ${ddgst:-false} 00:32:24.521 }, 00:32:24.521 "method": "bdev_nvme_attach_controller" 00:32:24.521 } 00:32:24.521 EOF 00:32:24.521 )") 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:24.521 13:16:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:24.521 "params": { 00:32:24.521 "name": "Nvme1", 00:32:24.521 "trtype": "tcp", 00:32:24.521 "traddr": "10.0.0.2", 00:32:24.521 "adrfam": "ipv4", 00:32:24.521 "trsvcid": "4420", 00:32:24.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:24.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:24.521 "hdgst": false, 00:32:24.521 "ddgst": false 00:32:24.521 }, 00:32:24.521 "method": "bdev_nvme_attach_controller" 00:32:24.521 }' 00:32:24.521 [2024-11-29 13:16:27.109189] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:32:24.521 [2024-11-29 13:16:27.109245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108597 ] 00:32:24.521 [2024-11-29 13:16:27.198036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.783 [2024-11-29 13:16:27.233036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.783 Running I/O for 15 seconds... 00:32:27.112 10930.00 IOPS, 42.70 MiB/s [2024-11-29T12:16:30.368Z] 10999.00 IOPS, 42.96 MiB/s [2024-11-29T12:16:30.368Z] 13:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1107911 00:32:27.688 13:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:27.688 [2024-11-29 13:16:30.071651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071762] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 13:16:30.071976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.688 [2024-11-29 13:16:30.071986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.688 [2024-11-29 
13:16:30.071994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.688 [2024-11-29 13:16:30.072008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:27.688 [2024-11-29 13:16:30.072018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.688 [2024-11-29 13:16:30.072033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:27.689 [2024-11-29 13:16:30.072042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: queued WRITE commands (lba:100488-100984, len:8) and READ commands (lba:99968-100336, len:8) on sqid:1, each completed with the same "ABORTED - SQ DELETION (00/08)" status ...]
00:32:27.691 [2024-11-29 13:16:30.074191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5e170 is same with the state(6) to be set
00:32:27.691 [2024-11-29 13:16:30.074199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:27.691 [2024-11-29 13:16:30.074206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:27.691 [2024-11-29 13:16:30.074213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100344 len:8 PRP1 0x0 PRP2 0x0
00:32:27.691 [2024-11-29 13:16:30.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.691 [2024-11-29 13:16:30.077862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:27.691 [2024-11-29 13:16:30.077916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:27.691 [2024-11-29 13:16:30.078703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:27.691 [2024-11-29 13:16:30.078720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:27.691 [2024-11-29 13:16:30.078729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:27.691 [2024-11-29 13:16:30.078948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:27.691 [2024-11-29 13:16:30.079175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:27.691 [2024-11-29 13:16:30.079184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:27.691 [2024-11-29 13:16:30.079192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:27.691 [2024-11-29 13:16:30.079201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:27.691 [2024-11-29 13:16:30.091903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.691 [2024-11-29 13:16:30.092566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.691 [2024-11-29 13:16:30.092607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.691 [2024-11-29 13:16:30.092618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.691 [2024-11-29 13:16:30.092859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.093083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.093092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.093100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.093108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.105794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.106517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.106558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.106569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.106808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.107031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.107040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.107048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.107057] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.119579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.120168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.120194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.120202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.120422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.120640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.120648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.120656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.120663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.133371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.134018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.134059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.134070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.134333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.134558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.134568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.134576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.134584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.147298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.147880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.147902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.147910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.148129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.148356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.148365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.148373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.148381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.161071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.161614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.161633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.161641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.161865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.162084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.162092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.162099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.162106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.175017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.175507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.175525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.175533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.175751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.175970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.175980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.175987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.175994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.188906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.189500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.189519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.189527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.189745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.189963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.189971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.189979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.189986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.202704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.203306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.203352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.203365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.203608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.203831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.203846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.692 [2024-11-29 13:16:30.203854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.692 [2024-11-29 13:16:30.203862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.692 [2024-11-29 13:16:30.216579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.692 [2024-11-29 13:16:30.217144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.692 [2024-11-29 13:16:30.217196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.692 [2024-11-29 13:16:30.217219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.692 [2024-11-29 13:16:30.217460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.692 [2024-11-29 13:16:30.217682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.692 [2024-11-29 13:16:30.217691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.217699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.217707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.230414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.231079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.231120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.231132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.231381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.231604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.231614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.231622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.231630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.244198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.244755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.244776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.244784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.245002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.245228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.245238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.245245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.245257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.258166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.258707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.258725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.258733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.258952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.259177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.259186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.259194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.259201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.272104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.272695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.272713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.272721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.272938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.273156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.273171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.273179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.273186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.286086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.286673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.286692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.286699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.286917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.287135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.287143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.287150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.287157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.300064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.300613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.300637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.300645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.300862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.301081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.301090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.301097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.301104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.314018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.314579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.314600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.314607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.314826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.315044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.315051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.315059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.315066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.327798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.328352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.328374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.328382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.328602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.328822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.328830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.328838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.328845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.341586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.693 [2024-11-29 13:16:30.342172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.693 [2024-11-29 13:16:30.342194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.693 [2024-11-29 13:16:30.342202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.693 [2024-11-29 13:16:30.342426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.693 [2024-11-29 13:16:30.342644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.693 [2024-11-29 13:16:30.342652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.693 [2024-11-29 13:16:30.342659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.693 [2024-11-29 13:16:30.342666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.693 [2024-11-29 13:16:30.355387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.694 [2024-11-29 13:16:30.355928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.694 [2024-11-29 13:16:30.355948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.694 [2024-11-29 13:16:30.355957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.694 [2024-11-29 13:16:30.356184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.694 [2024-11-29 13:16:30.356405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.694 [2024-11-29 13:16:30.356414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.694 [2024-11-29 13:16:30.356422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.694 [2024-11-29 13:16:30.356430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.956 [2024-11-29 13:16:30.369367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.956 [2024-11-29 13:16:30.369907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.956 [2024-11-29 13:16:30.369926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.956 [2024-11-29 13:16:30.369934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.956 [2024-11-29 13:16:30.370153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.956 [2024-11-29 13:16:30.370383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.956 [2024-11-29 13:16:30.370391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.956 [2024-11-29 13:16:30.370399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.956 [2024-11-29 13:16:30.370406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.956 [2024-11-29 13:16:30.383351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.956 [2024-11-29 13:16:30.383928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.956 [2024-11-29 13:16:30.383949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.956 [2024-11-29 13:16:30.383957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.956 [2024-11-29 13:16:30.384182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.956 [2024-11-29 13:16:30.384402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.384424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.384432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.384439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.397170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.397730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.397752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.397761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.397979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.398207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.398218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.398225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.398233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.410977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.411558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.411582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.411590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.411811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.412031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.412040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.412047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.412055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.424826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.425568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.425631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.425644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.425898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.426124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.426134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.426142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.426152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 9626.67 IOPS, 37.60 MiB/s [2024-11-29T12:16:30.637Z] [2024-11-29 13:16:30.438717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.439367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.439398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.439407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.439630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.439851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.439860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.439867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.439875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.452587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.453205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.453231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.453239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.453459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.453679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.453689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.453697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.453704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.466425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.466985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.467008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.467017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.467243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.467464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.467474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.467482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.467489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.480468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.481063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.481098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.481106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.481335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.481556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.481565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.481572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.481580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.494277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.494846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.494870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.494878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.495098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.495325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.495336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.495343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.495351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.508244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.508919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.508981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.508993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.957 [2024-11-29 13:16:30.509259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.957 [2024-11-29 13:16:30.509486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.957 [2024-11-29 13:16:30.509496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.957 [2024-11-29 13:16:30.509504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.957 [2024-11-29 13:16:30.509514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.957 [2024-11-29 13:16:30.522038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.957 [2024-11-29 13:16:30.522654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.957 [2024-11-29 13:16:30.522717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.957 [2024-11-29 13:16:30.522730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.522991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.523236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.523246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.523255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.523264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.536016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.536759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.536823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.536836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.537091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.537336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.537346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.537355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.537364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.549895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.550541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.550605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.550618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.550872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.551100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.551109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.551118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.551127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.563872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.564556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.564619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.564631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.564885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.565111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.565128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.565136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.565146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.577688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.578418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.578481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.578494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.578748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.578975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.578984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.578992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.579001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.591561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.592315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.592329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.592584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.592811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.592822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.592830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.592840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.605349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.605978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.606007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.606015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.606248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.606470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.606480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.606488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.606503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.619235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.619801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.619826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.619834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:27.958 [2024-11-29 13:16:30.620054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:27.958 [2024-11-29 13:16:30.620285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:27.958 [2024-11-29 13:16:30.620296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:27.958 [2024-11-29 13:16:30.620304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:27.958 [2024-11-29 13:16:30.620311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:27.958 [2024-11-29 13:16:30.633028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:27.958 [2024-11-29 13:16:30.633693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:27.958 [2024-11-29 13:16:30.633755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:27.958 [2024-11-29 13:16:30.633768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.221 [2024-11-29 13:16:30.634021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.221 [2024-11-29 13:16:30.634272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.221 [2024-11-29 13:16:30.634283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.221 [2024-11-29 13:16:30.634292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.221 [2024-11-29 13:16:30.634301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.221 [2024-11-29 13:16:30.646851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.222 [2024-11-29 13:16:30.647456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.222 [2024-11-29 13:16:30.647487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.222 [2024-11-29 13:16:30.647497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.222 [2024-11-29 13:16:30.647720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.222 [2024-11-29 13:16:30.647940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.222 [2024-11-29 13:16:30.647951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.222 [2024-11-29 13:16:30.647959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.222 [2024-11-29 13:16:30.647966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.222 [2024-11-29 13:16:30.660698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.222 [2024-11-29 13:16:30.661364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.222 [2024-11-29 13:16:30.661435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.222 [2024-11-29 13:16:30.661448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.222 [2024-11-29 13:16:30.661702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.222 [2024-11-29 13:16:30.661928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.222 [2024-11-29 13:16:30.661938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.222 [2024-11-29 13:16:30.661946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.222 [2024-11-29 13:16:30.661955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.222 [2024-11-29 13:16:30.674486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.222 [2024-11-29 13:16:30.675124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.222 [2024-11-29 13:16:30.675152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.222 [2024-11-29 13:16:30.675173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.222 [2024-11-29 13:16:30.675395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.222 [2024-11-29 13:16:30.675617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.222 [2024-11-29 13:16:30.675625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.222 [2024-11-29 13:16:30.675633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.222 [2024-11-29 13:16:30.675641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.222 [2024-11-29 13:16:30.688294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.689017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.689078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.689091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.689363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.689591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.689601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.689610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.689619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.702134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.702830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.702893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.702905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.703184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.703411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.703421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.703430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.703440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.715973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.716742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.716806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.716819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.717073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.717315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.717326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.717334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.717344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.729871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.730568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.730630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.730642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.730897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.731124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.731133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.731141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.731151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.743704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.744285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.744315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.744325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.744546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.744768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.744785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.744794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.744802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.757536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.758109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.758184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.758198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.758452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.758678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.758688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.758697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.222 [2024-11-29 13:16:30.758706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.222 [2024-11-29 13:16:30.771428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.222 [2024-11-29 13:16:30.772118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.222 [2024-11-29 13:16:30.772192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.222 [2024-11-29 13:16:30.772206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.222 [2024-11-29 13:16:30.772461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.222 [2024-11-29 13:16:30.772688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.222 [2024-11-29 13:16:30.772697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.222 [2024-11-29 13:16:30.772706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.772716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.785241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.785970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.786033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.786046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.786317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.786546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.786555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.786564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.786580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.799104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.799816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.799878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.799891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.800146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.800389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.800399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.800408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.800417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.812935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.813685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.813748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.813761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.814015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.814269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.814281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.814289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.814298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.826820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.827570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.827633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.827646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.827900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.828126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.828136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.828146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.828155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.840713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.841451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.841522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.841536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.841791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.842017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.842027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.842037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.842047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.854599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.855200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.855266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.855279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.855534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.855760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.855771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.855779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.855788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.867343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.867939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.867996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.868005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.868207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.868366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.868373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.868379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.868386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.880009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.880564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.880588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.880595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.880755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.880907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.880913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.880919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.880924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.223 [2024-11-29 13:16:30.892636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.223 [2024-11-29 13:16:30.893117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.223 [2024-11-29 13:16:30.893135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.223 [2024-11-29 13:16:30.893141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.223 [2024-11-29 13:16:30.893297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.223 [2024-11-29 13:16:30.893449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.223 [2024-11-29 13:16:30.893455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.223 [2024-11-29 13:16:30.893461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.223 [2024-11-29 13:16:30.893467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.905341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.905842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.905859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.905865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.906016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.906173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.906180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.906186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.906191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.918054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.918502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.918518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.918524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.918675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.918826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.918836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.918842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.918847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.930711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.931305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.931343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.931352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.931523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.931678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.931684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.931690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.931696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.943414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.943958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.943993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.944001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.944175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.944330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.944337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.944342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.944348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.956053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.956665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.956700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.956708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.956876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.957030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.957036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.957042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.957052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.968749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.969250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.969284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.969293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.969463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.969615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.969622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.969627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.969633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.981468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.981950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.981966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.981971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.486 [2024-11-29 13:16:30.982121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.486 [2024-11-29 13:16:30.982278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.486 [2024-11-29 13:16:30.982284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.486 [2024-11-29 13:16:30.982289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.486 [2024-11-29 13:16:30.982294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.486 [2024-11-29 13:16:30.994112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:28.486 [2024-11-29 13:16:30.994693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:28.486 [2024-11-29 13:16:30.994725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:28.486 [2024-11-29 13:16:30.994733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:28.487 [2024-11-29 13:16:30.994899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:28.487 [2024-11-29 13:16:30.995052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:28.487 [2024-11-29 13:16:30.995059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:28.487 [2024-11-29 13:16:30.995064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:28.487 [2024-11-29 13:16:30.995070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:28.487 [2024-11-29 13:16:31.006783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.487 [2024-11-29 13:16:31.007312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.487 [2024-11-29 13:16:31.007332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.487 [2024-11-29 13:16:31.007338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.487 [2024-11-29 13:16:31.007488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.487 [2024-11-29 13:16:31.007638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.487 [2024-11-29 13:16:31.007644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.487 [2024-11-29 13:16:31.007649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.487 [2024-11-29 13:16:31.007654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.487-00:32:28.752 [2024-11-29 13:16:31.019481 .. 13:16:31.336856] (26 further identical reset/reconnect cycles elided; each repeats the same sequence at ~12.6 ms intervals: nvme_ctrlr_disconnect *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller -> posix_sock_create *ERROR*: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 -> nvme_tcp_qpair_process_completions *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor -> nvme_ctrlr_process_init *ERROR*: Ctrlr is in error state -> spdk_nvme_ctrlr_reconnect_poll_async *ERROR*: controller reinitialization failed -> nvme_ctrlr_fail *ERROR*: in failed state. -> bdev_nvme_reset_ctrlr_complete *ERROR*: Resetting controller failed.)
00:32:28.752 [2024-11-29 13:16:31.348546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.752 [2024-11-29 13:16:31.349033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.752 [2024-11-29 13:16:31.349063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.752 [2024-11-29 13:16:31.349072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.752 [2024-11-29 13:16:31.349245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.752 [2024-11-29 13:16:31.349399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.752 [2024-11-29 13:16:31.349405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.752 [2024-11-29 13:16:31.349410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.752 [2024-11-29 13:16:31.349416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.752 [2024-11-29 13:16:31.361235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.752 [2024-11-29 13:16:31.361812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.752 [2024-11-29 13:16:31.361842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.752 [2024-11-29 13:16:31.361851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.752 [2024-11-29 13:16:31.362017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.752 [2024-11-29 13:16:31.362179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.752 [2024-11-29 13:16:31.362186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.752 [2024-11-29 13:16:31.362191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.752 [2024-11-29 13:16:31.362197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.752 [2024-11-29 13:16:31.373868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.752 [2024-11-29 13:16:31.374380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.752 [2024-11-29 13:16:31.374410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.752 [2024-11-29 13:16:31.374419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.752 [2024-11-29 13:16:31.374588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.752 [2024-11-29 13:16:31.374741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.752 [2024-11-29 13:16:31.374748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.752 [2024-11-29 13:16:31.374753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.752 [2024-11-29 13:16:31.374759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.752 [2024-11-29 13:16:31.386592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.752 [2024-11-29 13:16:31.387171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.752 [2024-11-29 13:16:31.387200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.752 [2024-11-29 13:16:31.387209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.752 [2024-11-29 13:16:31.387377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.752 [2024-11-29 13:16:31.387530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.752 [2024-11-29 13:16:31.387536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.752 [2024-11-29 13:16:31.387541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.752 [2024-11-29 13:16:31.387547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.752 [2024-11-29 13:16:31.399229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.752 [2024-11-29 13:16:31.399796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.752 [2024-11-29 13:16:31.399826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.752 [2024-11-29 13:16:31.399835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.752 [2024-11-29 13:16:31.400001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.752 [2024-11-29 13:16:31.400154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.752 [2024-11-29 13:16:31.400168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.752 [2024-11-29 13:16:31.400174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.752 [2024-11-29 13:16:31.400179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.753 [2024-11-29 13:16:31.411858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.753 [2024-11-29 13:16:31.412443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.753 [2024-11-29 13:16:31.412473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.753 [2024-11-29 13:16:31.412482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.753 [2024-11-29 13:16:31.412647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.753 [2024-11-29 13:16:31.412800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.753 [2024-11-29 13:16:31.412810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.753 [2024-11-29 13:16:31.412815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.753 [2024-11-29 13:16:31.412821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:28.753 [2024-11-29 13:16:31.424514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:28.753 [2024-11-29 13:16:31.425083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:28.753 [2024-11-29 13:16:31.425113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:28.753 [2024-11-29 13:16:31.425122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:28.753 [2024-11-29 13:16:31.425299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:28.753 [2024-11-29 13:16:31.425453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:28.753 [2024-11-29 13:16:31.425459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:28.753 [2024-11-29 13:16:31.425465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:28.753 [2024-11-29 13:16:31.425470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.015 7220.00 IOPS, 28.20 MiB/s [2024-11-29T12:16:31.695Z] [2024-11-29 13:16:31.438289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.015 [2024-11-29 13:16:31.438768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.015 [2024-11-29 13:16:31.438798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.015 [2024-11-29 13:16:31.438807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.015 [2024-11-29 13:16:31.438972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.015 [2024-11-29 13:16:31.439124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.015 [2024-11-29 13:16:31.439131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.015 [2024-11-29 13:16:31.439136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.015 [2024-11-29 13:16:31.439142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.015 [2024-11-29 13:16:31.450974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.015 [2024-11-29 13:16:31.451561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.015 [2024-11-29 13:16:31.451592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.015 [2024-11-29 13:16:31.451600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.015 [2024-11-29 13:16:31.451766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.015 [2024-11-29 13:16:31.451919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.015 [2024-11-29 13:16:31.451925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.015 [2024-11-29 13:16:31.451930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.015 [2024-11-29 13:16:31.451940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.015 [2024-11-29 13:16:31.463629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.015 [2024-11-29 13:16:31.464191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.015 [2024-11-29 13:16:31.464222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.015 [2024-11-29 13:16:31.464230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.015 [2024-11-29 13:16:31.464398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.015 [2024-11-29 13:16:31.464551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.015 [2024-11-29 13:16:31.464557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.015 [2024-11-29 13:16:31.464563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.015 [2024-11-29 13:16:31.464569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.015 [2024-11-29 13:16:31.476262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.015 [2024-11-29 13:16:31.476864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.015 [2024-11-29 13:16:31.476893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.476901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.477067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.477226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.477234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.477239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.477245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.488927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.489518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.489548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.489557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.489722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.489875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.489881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.489887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.489893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.501574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.502077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.502092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.502097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.502251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.502402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.502407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.502413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.502417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.514238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.514712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.514726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.514731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.514881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.515031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.515036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.515041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.515046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.526875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.527330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.527343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.527348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.527498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.527648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.527654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.527659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.527664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.539488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.539971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.539983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.539989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.540143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.540298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.540304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.540309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.540313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.552126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.552595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.552625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.552634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.552799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.552952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.552958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.552964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.552969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.564816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.565433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.565463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.565472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.565638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.565790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.565796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.565802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.565809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.577489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.578030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.578060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.578069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.578241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.578395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.578405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.578410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.578416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.590098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.590657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.016 [2024-11-29 13:16:31.590687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.016 [2024-11-29 13:16:31.590696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.016 [2024-11-29 13:16:31.590861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.016 [2024-11-29 13:16:31.591015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.016 [2024-11-29 13:16:31.591021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.016 [2024-11-29 13:16:31.591027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.016 [2024-11-29 13:16:31.591033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.016 [2024-11-29 13:16:31.602730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.016 [2024-11-29 13:16:31.603173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.017 [2024-11-29 13:16:31.603189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.017 [2024-11-29 13:16:31.603194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.017 [2024-11-29 13:16:31.603344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.017 [2024-11-29 13:16:31.603494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.017 [2024-11-29 13:16:31.603500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.017 [2024-11-29 13:16:31.603505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.017 [2024-11-29 13:16:31.603510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.017 [2024-11-29 13:16:31.615388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.017 [2024-11-29 13:16:31.615727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.017 [2024-11-29 13:16:31.615741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.017 [2024-11-29 13:16:31.615746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.017 [2024-11-29 13:16:31.615896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.017 [2024-11-29 13:16:31.616046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.017 [2024-11-29 13:16:31.616052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.017 [2024-11-29 13:16:31.616058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.017 [2024-11-29 13:16:31.616067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.017 [2024-11-29 13:16:31.628037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.017 [2024-11-29 13:16:31.628526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.017 [2024-11-29 13:16:31.628539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.017 [2024-11-29 13:16:31.628545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.017 [2024-11-29 13:16:31.628694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.017 [2024-11-29 13:16:31.628844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.017 [2024-11-29 13:16:31.628850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.017 [2024-11-29 13:16:31.628855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.017 [2024-11-29 13:16:31.628859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.017 [2024-11-29 13:16:31.640699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.017 [2024-11-29 13:16:31.641141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.017 [2024-11-29 13:16:31.641153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.017 [2024-11-29 13:16:31.641162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.017 [2024-11-29 13:16:31.641312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.017 [2024-11-29 13:16:31.641462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.017 [2024-11-29 13:16:31.641467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.017 [2024-11-29 13:16:31.641473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.017 [2024-11-29 13:16:31.641478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.017 [2024-11-29 13:16:31.653327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.017 [2024-11-29 13:16:31.653860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.017 [2024-11-29 13:16:31.653890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.017 [2024-11-29 13:16:31.653899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.017 [2024-11-29 13:16:31.654064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.017 [2024-11-29 13:16:31.654223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.017 [2024-11-29 13:16:31.654231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.017 [2024-11-29 13:16:31.654236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.017 [2024-11-29 13:16:31.654242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.017 [2024-11-29 13:16:31.665923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.017 [2024-11-29 13:16:31.666489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.017 [2024-11-29 13:16:31.666520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.017 [2024-11-29 13:16:31.666529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.017 [2024-11-29 13:16:31.666694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.017 [2024-11-29 13:16:31.666847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.017 [2024-11-29 13:16:31.666853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.017 [2024-11-29 13:16:31.666858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.017 [2024-11-29 13:16:31.666865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.017 [2024-11-29 13:16:31.678556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.017 [2024-11-29 13:16:31.679036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.017 [2024-11-29 13:16:31.679050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.017 [2024-11-29 13:16:31.679056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.017 [2024-11-29 13:16:31.679210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.017 [2024-11-29 13:16:31.679361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.017 [2024-11-29 13:16:31.679366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.017 [2024-11-29 13:16:31.679372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.017 [2024-11-29 13:16:31.679376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.017 [2024-11-29 13:16:31.691195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.017 [2024-11-29 13:16:31.691694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.017 [2024-11-29 13:16:31.691706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.017 [2024-11-29 13:16:31.691712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.017 [2024-11-29 13:16:31.691861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.017 [2024-11-29 13:16:31.692010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.017 [2024-11-29 13:16:31.692016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.017 [2024-11-29 13:16:31.692021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.017 [2024-11-29 13:16:31.692026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.280 [2024-11-29 13:16:31.703840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.280 [2024-11-29 13:16:31.704485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.280 [2024-11-29 13:16:31.704515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.280 [2024-11-29 13:16:31.704524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.280 [2024-11-29 13:16:31.704693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.280 [2024-11-29 13:16:31.704846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.280 [2024-11-29 13:16:31.704852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.280 [2024-11-29 13:16:31.704858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.280 [2024-11-29 13:16:31.704864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.280 [2024-11-29 13:16:31.716448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.280 [2024-11-29 13:16:31.716938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.280 [2024-11-29 13:16:31.716953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.280 [2024-11-29 13:16:31.716958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.280 [2024-11-29 13:16:31.717108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.280 [2024-11-29 13:16:31.717263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.280 [2024-11-29 13:16:31.717270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.280 [2024-11-29 13:16:31.717275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.280 [2024-11-29 13:16:31.717280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.280 [2024-11-29 13:16:31.729105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.280 [2024-11-29 13:16:31.729783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.280 [2024-11-29 13:16:31.729813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.280 [2024-11-29 13:16:31.729822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.280 [2024-11-29 13:16:31.729987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.280 [2024-11-29 13:16:31.730140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.280 [2024-11-29 13:16:31.730147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.280 [2024-11-29 13:16:31.730152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.280 [2024-11-29 13:16:31.730164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.280 [2024-11-29 13:16:31.741706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.280 [2024-11-29 13:16:31.742094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.280 [2024-11-29 13:16:31.742109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.280 [2024-11-29 13:16:31.742115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.280 [2024-11-29 13:16:31.742269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.280 [2024-11-29 13:16:31.742420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.280 [2024-11-29 13:16:31.742430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.280 [2024-11-29 13:16:31.742435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.280 [2024-11-29 13:16:31.742440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.280 [2024-11-29 13:16:31.754408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.280 [2024-11-29 13:16:31.754753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.280 [2024-11-29 13:16:31.754766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.280 [2024-11-29 13:16:31.754772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.280 [2024-11-29 13:16:31.754922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.280 [2024-11-29 13:16:31.755071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.280 [2024-11-29 13:16:31.755076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.280 [2024-11-29 13:16:31.755081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.280 [2024-11-29 13:16:31.755086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.767041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.767493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.767506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.767511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.767660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.767810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.767816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.767821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.767826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.779640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.780078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.780090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.780095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.780248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.780398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.780404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.780409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.780416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.792240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.792859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.792888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.792897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.793063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.793221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.793228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.793234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.793240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.804916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.805481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.805512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.805521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.805689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.805842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.805848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.805854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.805859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.817539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.818108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.818138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.818147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.818321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.818475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.818481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.818487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.818493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.830189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.830834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.830864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.830873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.831041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.831201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.831208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.831213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.831219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.842910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.843512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.843542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.843550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.843716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.843868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.843875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.843880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.843886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.855572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.856020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.856035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.856041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.856195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.856346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.856351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.856357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.856361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.868196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.868657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.868669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.868674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.868828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.868977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.868983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.868989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.281 [2024-11-29 13:16:31.868994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.281 [2024-11-29 13:16:31.880820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.281 [2024-11-29 13:16:31.881378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.281 [2024-11-29 13:16:31.881408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.281 [2024-11-29 13:16:31.881417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.281 [2024-11-29 13:16:31.881583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.281 [2024-11-29 13:16:31.881736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.281 [2024-11-29 13:16:31.881742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.281 [2024-11-29 13:16:31.881748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.282 [2024-11-29 13:16:31.881754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.282 [2024-11-29 13:16:31.893452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.282 [2024-11-29 13:16:31.894012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.282 [2024-11-29 13:16:31.894043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.282 [2024-11-29 13:16:31.894052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.282 [2024-11-29 13:16:31.894224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.282 [2024-11-29 13:16:31.894378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.282 [2024-11-29 13:16:31.894384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.282 [2024-11-29 13:16:31.894389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.282 [2024-11-29 13:16:31.894395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.282 [2024-11-29 13:16:31.906075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.282 [2024-11-29 13:16:31.906659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.282 [2024-11-29 13:16:31.906689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.282 [2024-11-29 13:16:31.906698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.282 [2024-11-29 13:16:31.906863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.282 [2024-11-29 13:16:31.907016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.282 [2024-11-29 13:16:31.907027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.282 [2024-11-29 13:16:31.907033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.282 [2024-11-29 13:16:31.907039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.282 [2024-11-29 13:16:31.918729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.282 [2024-11-29 13:16:31.919284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.282 [2024-11-29 13:16:31.919315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.282 [2024-11-29 13:16:31.919323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.282 [2024-11-29 13:16:31.919491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.282 [2024-11-29 13:16:31.919644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.282 [2024-11-29 13:16:31.919650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.282 [2024-11-29 13:16:31.919656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.282 [2024-11-29 13:16:31.919662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.282 [2024-11-29 13:16:31.931362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:29.282 [2024-11-29 13:16:31.931923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:29.282 [2024-11-29 13:16:31.931954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:29.282 [2024-11-29 13:16:31.931963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:29.282 [2024-11-29 13:16:31.932129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:29.282 [2024-11-29 13:16:31.932288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:29.282 [2024-11-29 13:16:31.932295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:29.282 [2024-11-29 13:16:31.932301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:29.282 [2024-11-29 13:16:31.932306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:29.282 [2024-11-29 13:16:31.943998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.282 [2024-11-29 13:16:31.944494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.282 [2024-11-29 13:16:31.944510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.282 [2024-11-29 13:16:31.944515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.282 [2024-11-29 13:16:31.944665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.282 [2024-11-29 13:16:31.944815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.282 [2024-11-29 13:16:31.944821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.282 [2024-11-29 13:16:31.944827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.282 [2024-11-29 13:16:31.944835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.282 [2024-11-29 13:16:31.956654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.282 [2024-11-29 13:16:31.957006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.282 [2024-11-29 13:16:31.957020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.282 [2024-11-29 13:16:31.957026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.282 [2024-11-29 13:16:31.957180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:31.957331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:31.957341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:31.957347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:31.957352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:31.969319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:31.969798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:31.969811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:31.969816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:31.969965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:31.970115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:31.970122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:31.970126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:31.970131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:31.981952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:31.982488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:31.982519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:31.982528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:31.982694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:31.982847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:31.982853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:31.982858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:31.982864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:31.994564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:31.995126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:31.995166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:31.995176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:31.995349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:31.995503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:31.995509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:31.995514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:31.995520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:32.007205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:32.007688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:32.007703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:32.007708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:32.007859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:32.008009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:32.008015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:32.008020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:32.008025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:32.019848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:32.020308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:32.020322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:32.020327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:32.020476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:32.020626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:32.020631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:32.020636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:32.020642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:32.032479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:32.032955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:32.032969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:32.032974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:32.033127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.545 [2024-11-29 13:16:32.033281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.545 [2024-11-29 13:16:32.033288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.545 [2024-11-29 13:16:32.033293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.545 [2024-11-29 13:16:32.033298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.545 [2024-11-29 13:16:32.045123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.545 [2024-11-29 13:16:32.045604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.545 [2024-11-29 13:16:32.045617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.545 [2024-11-29 13:16:32.045623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.545 [2024-11-29 13:16:32.045772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.045922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.045927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.045933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.045938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.057757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.058114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.058127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.058132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.058286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.058436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.058442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.058447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.058451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.070409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.070951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.070982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.070991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.071156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.071316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.071326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.071332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.071338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.083046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.083606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.083643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.083652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.083818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.083973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.083980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.083985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.083992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.095693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.096192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.096208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.096214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.096364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.096514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.096521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.096526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.096530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.108355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.108895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.108926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.108935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.109100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.109259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.109266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.109272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.109281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.120966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.121529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.121559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.121568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.121735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.121888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.121894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.121900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.121906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.133688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.134173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.134189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.134195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.134345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.134495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.134501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.134507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.134512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.146349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.146846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.146859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.146865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.147015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.147168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.147175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.147180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.147185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.159016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.159477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.159493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.159499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.546 [2024-11-29 13:16:32.159649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.546 [2024-11-29 13:16:32.159798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.546 [2024-11-29 13:16:32.159804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.546 [2024-11-29 13:16:32.159810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.546 [2024-11-29 13:16:32.159815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.546 [2024-11-29 13:16:32.171643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.546 [2024-11-29 13:16:32.172075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.546 [2024-11-29 13:16:32.172087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.546 [2024-11-29 13:16:32.172093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.547 [2024-11-29 13:16:32.172247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.547 [2024-11-29 13:16:32.172397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.547 [2024-11-29 13:16:32.172403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.547 [2024-11-29 13:16:32.172408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.547 [2024-11-29 13:16:32.172413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.547 [2024-11-29 13:16:32.184234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.547 [2024-11-29 13:16:32.184686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.547 [2024-11-29 13:16:32.184698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.547 [2024-11-29 13:16:32.184704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.547 [2024-11-29 13:16:32.184853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.547 [2024-11-29 13:16:32.185003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.547 [2024-11-29 13:16:32.185009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.547 [2024-11-29 13:16:32.185014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.547 [2024-11-29 13:16:32.185019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.547 [2024-11-29 13:16:32.196843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.547 [2024-11-29 13:16:32.197299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.547 [2024-11-29 13:16:32.197311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.547 [2024-11-29 13:16:32.197317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.547 [2024-11-29 13:16:32.197469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.547 [2024-11-29 13:16:32.197618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.547 [2024-11-29 13:16:32.197624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.547 [2024-11-29 13:16:32.197629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.547 [2024-11-29 13:16:32.197634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.547 [2024-11-29 13:16:32.209454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.547 [2024-11-29 13:16:32.210025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.547 [2024-11-29 13:16:32.210054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.547 [2024-11-29 13:16:32.210063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.547 [2024-11-29 13:16:32.210241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.547 [2024-11-29 13:16:32.210395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.547 [2024-11-29 13:16:32.210402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.547 [2024-11-29 13:16:32.210407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.547 [2024-11-29 13:16:32.210413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.547 [2024-11-29 13:16:32.222141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.810 [2024-11-29 13:16:32.222635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.810 [2024-11-29 13:16:32.222651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.810 [2024-11-29 13:16:32.222657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.810 [2024-11-29 13:16:32.222807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.810 [2024-11-29 13:16:32.222956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.810 [2024-11-29 13:16:32.222962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.810 [2024-11-29 13:16:32.222968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.810 [2024-11-29 13:16:32.222973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.810 [2024-11-29 13:16:32.234794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.810 [2024-11-29 13:16:32.235262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.810 [2024-11-29 13:16:32.235275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.810 [2024-11-29 13:16:32.235281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.810 [2024-11-29 13:16:32.235431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.810 [2024-11-29 13:16:32.235580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.810 [2024-11-29 13:16:32.235590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.810 [2024-11-29 13:16:32.235596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.810 [2024-11-29 13:16:32.235600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.810 [2024-11-29 13:16:32.247428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.810 [2024-11-29 13:16:32.247869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.810 [2024-11-29 13:16:32.247881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.810 [2024-11-29 13:16:32.247887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.810 [2024-11-29 13:16:32.248037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.810 [2024-11-29 13:16:32.248191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.810 [2024-11-29 13:16:32.248197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.810 [2024-11-29 13:16:32.248203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.810 [2024-11-29 13:16:32.248207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.810 [2024-11-29 13:16:32.260019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.810 [2024-11-29 13:16:32.260562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.260592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.260601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.260766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.260919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.260925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.260931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.260937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.272619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.273216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.273246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.273255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.273424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.273576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.273583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.273589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.273602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.285292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.285855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.285885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.285894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.286060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.286220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.286227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.286232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.286239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.297910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.298480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.298510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.298518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.298684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.298837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.298843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.298849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.298855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.310540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.311069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.311099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.311108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.311280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.311434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.311441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.311447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.311452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.323276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.323837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.323871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.323879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.324045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.324205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.324213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.324218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.324224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.335897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.336485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.336515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.336524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.336689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.336842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.336848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.336854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.336860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.348545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.349114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.349144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.349153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.349325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.349479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.349485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.349490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.349496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.361172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.361749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.361779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.361788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.361961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.362114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.362120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.362126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.362133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.373814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.374430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.811 [2024-11-29 13:16:32.374460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.811 [2024-11-29 13:16:32.374469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.811 [2024-11-29 13:16:32.374637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.811 [2024-11-29 13:16:32.374790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.811 [2024-11-29 13:16:32.374797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.811 [2024-11-29 13:16:32.374803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.811 [2024-11-29 13:16:32.374809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.811 [2024-11-29 13:16:32.386486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.811 [2024-11-29 13:16:32.386965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.386980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.386985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.387135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.387291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.387298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.387303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.387307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.399116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.399675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.399705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.399713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.399879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.400032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.400042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.400047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.400053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.411739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.412262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.412292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.412301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.412469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.412622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.412628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.412634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.412639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.424464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.424937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.424952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.424958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.425108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.425264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.425270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.425275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.425280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 5776.00 IOPS, 22.56 MiB/s [2024-11-29T12:16:32.492Z] [2024-11-29 13:16:32.438224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.438738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.438768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.438777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.438942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.439095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.439101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.439106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.439116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.450954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.451513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.451543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.451552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.451717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.451870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.451877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.451882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.451888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.463582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.464054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.464083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.464091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.464272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.464426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.464432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.464438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.464444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:29.812 [2024-11-29 13:16:32.476441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:29.812 [2024-11-29 13:16:32.477010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.812 [2024-11-29 13:16:32.477040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:29.812 [2024-11-29 13:16:32.477049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:29.812 [2024-11-29 13:16:32.477225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:29.812 [2024-11-29 13:16:32.477379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:29.812 [2024-11-29 13:16:32.477385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:29.812 [2024-11-29 13:16:32.477391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:29.812 [2024-11-29 13:16:32.477396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.074 [2024-11-29 13:16:32.489083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.489683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.489713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.489722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.489888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.490041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.490047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.490053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.490058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.501754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.502224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.502255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.502263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.502431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.502585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.502591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.502597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.502603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.514425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.514992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.515021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.515030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.515202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.515355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.515362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.515367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.515373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.527052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.527618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.527648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.527657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.527826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.527979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.527985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.527990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.527996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.539677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.540241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.540271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.540280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.540447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.540600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.540606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.540612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.540618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.552313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.552800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.552815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.552821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.552971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.553121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.553127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.553132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.553137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.564949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.565401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.565415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.565420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.565570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.565719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.565729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.565734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.565739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.577556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.578136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.578171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.578180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.578349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.578502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.578508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.578514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.578520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.590200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.590771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.590800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.590809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.590975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.591128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.591134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.591140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.075 [2024-11-29 13:16:32.591146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.075 [2024-11-29 13:16:32.602837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.075 [2024-11-29 13:16:32.603402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.075 [2024-11-29 13:16:32.603432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.075 [2024-11-29 13:16:32.603441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.075 [2024-11-29 13:16:32.603606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.075 [2024-11-29 13:16:32.603759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.075 [2024-11-29 13:16:32.603766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.075 [2024-11-29 13:16:32.603771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.603781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.615469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.616052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.616082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.616091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.616264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.616418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.616425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.616430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.616436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.628130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.628736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.628766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.628775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.628941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.629093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.629100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.629105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.629111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.640798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.641381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.641411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.641420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.641585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.641737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.641743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.641749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.641755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.653446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.654014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.654044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.654053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.654226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.654381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.654387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.654392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.654398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.666154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.666724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.666754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.666762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.666928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.667080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.667086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.667092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.667098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.678774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.679353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.679383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.679392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.679557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.679710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.679716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.679722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.679728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.691419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.692045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.692075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.692083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.692262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.692416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.692422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.692427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.692433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.704107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.704673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.704703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.704712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.704877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.705030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.705037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.705042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.705048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.716737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.717245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.717260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.717266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.717416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.717566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.717572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.717577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.076 [2024-11-29 13:16:32.717582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.076 [2024-11-29 13:16:32.729399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.076 [2024-11-29 13:16:32.729947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.076 [2024-11-29 13:16:32.729977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.076 [2024-11-29 13:16:32.729986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.076 [2024-11-29 13:16:32.730152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.076 [2024-11-29 13:16:32.730313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.076 [2024-11-29 13:16:32.730325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.076 [2024-11-29 13:16:32.730331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.077 [2024-11-29 13:16:32.730337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.077 [2024-11-29 13:16:32.742013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.077 [2024-11-29 13:16:32.742567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.077 [2024-11-29 13:16:32.742597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.077 [2024-11-29 13:16:32.742606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.077 [2024-11-29 13:16:32.742771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.077 [2024-11-29 13:16:32.742932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.077 [2024-11-29 13:16:32.742939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.077 [2024-11-29 13:16:32.742945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.077 [2024-11-29 13:16:32.742951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.754639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.755094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.755109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.755115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.755271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.755421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.755427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.755433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.755438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.767260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.767823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.767853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.767861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.768027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.768188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.768195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.768201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.768210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.779879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.780464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.780495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.780503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.780668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.780821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.780828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.780833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.780839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.792521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.793127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.793136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.793310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.793463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.793470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.793475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.793481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.805156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.805727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.805758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.805766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.805931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.806084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.806091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.806096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.806102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.817794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.818383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.818413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.818422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.818587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.818740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.818746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.818752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.818758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.830439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.830938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.830952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.830958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.340 [2024-11-29 13:16:32.831109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.340 [2024-11-29 13:16:32.831264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.340 [2024-11-29 13:16:32.831271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.340 [2024-11-29 13:16:32.831276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.340 [2024-11-29 13:16:32.831281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.340 [2024-11-29 13:16:32.843096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.340 [2024-11-29 13:16:32.843546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.340 [2024-11-29 13:16:32.843560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.340 [2024-11-29 13:16:32.843565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.843715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.843865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.843870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.843875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.843880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.855692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.856204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.856217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.856222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.856376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.856526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.856531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.856536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.856541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.868355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.868911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.868941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.868950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.869117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.869278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.869286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.869291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.869297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.880978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.881546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.881576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.881585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.881750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.881903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.881909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.881915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.881921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.893607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.894218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.894248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.894257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.894425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.894579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.894592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.894597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.894603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.906286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.906862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.906892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.906900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.907066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.907227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.907234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.907239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.907245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.918921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.919475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.919505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.919514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.919680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.919833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.919840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.919845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.919851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.931571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.932143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.932179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.932187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.932352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.932505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.932512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.932517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.932527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.944216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.944766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.944796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.944805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.944970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.945123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.945129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.945135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.945141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.956819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.341 [2024-11-29 13:16:32.957314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.341 [2024-11-29 13:16:32.957329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.341 [2024-11-29 13:16:32.957335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.341 [2024-11-29 13:16:32.957485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.341 [2024-11-29 13:16:32.957635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.341 [2024-11-29 13:16:32.957640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.341 [2024-11-29 13:16:32.957645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.341 [2024-11-29 13:16:32.957650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.341 [2024-11-29 13:16:32.969459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.342 [2024-11-29 13:16:32.969939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.342 [2024-11-29 13:16:32.969952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.342 [2024-11-29 13:16:32.969957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.342 [2024-11-29 13:16:32.970106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.342 [2024-11-29 13:16:32.970262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.342 [2024-11-29 13:16:32.970268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.342 [2024-11-29 13:16:32.970273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.342 [2024-11-29 13:16:32.970277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.342 [2024-11-29 13:16:32.982092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.342 [2024-11-29 13:16:32.982685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.342 [2024-11-29 13:16:32.982716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.342 [2024-11-29 13:16:32.982724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.342 [2024-11-29 13:16:32.982890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.342 [2024-11-29 13:16:32.983043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.342 [2024-11-29 13:16:32.983049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.342 [2024-11-29 13:16:32.983054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.342 [2024-11-29 13:16:32.983060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.342 [2024-11-29 13:16:32.994740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.342 [2024-11-29 13:16:32.995310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.342 [2024-11-29 13:16:32.995340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.342 [2024-11-29 13:16:32.995348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.342 [2024-11-29 13:16:32.995514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.342 [2024-11-29 13:16:32.995667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.342 [2024-11-29 13:16:32.995673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.342 [2024-11-29 13:16:32.995679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.342 [2024-11-29 13:16:32.995684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.342 [2024-11-29 13:16:33.007365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.342 [2024-11-29 13:16:33.007857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.342 [2024-11-29 13:16:33.007872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.342 [2024-11-29 13:16:33.007877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.342 [2024-11-29 13:16:33.008027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.342 [2024-11-29 13:16:33.008183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.342 [2024-11-29 13:16:33.008189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.342 [2024-11-29 13:16:33.008194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.342 [2024-11-29 13:16:33.008199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.020027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.020488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.020501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.020507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.020660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.020810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.020816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.020821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.020826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.032652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.033220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.033250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.033259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.033424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.033577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.033584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.033589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.033595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.045286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.045865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.045896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.045904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.046070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.046231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.046238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.046244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.046249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.057920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.058496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.058527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.058535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.058703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.058856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.058866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.058871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.058877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1107911 Killed "${NVMF_APP[@]}" "$@"
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1109610
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1109610
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1109610 ']'
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:30.712 [2024-11-29 13:16:33.070565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:30.712 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:30.712 [2024-11-29 13:16:33.071056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:30.712 [2024-11-29 13:16:33.071072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:30.712 [2024-11-29 13:16:33.071077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:30.712 [2024-11-29 13:16:33.071233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:30.712 [2024-11-29 13:16:33.071383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:30.712 [2024-11-29 13:16:33.071389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:30.712 [2024-11-29 13:16:33.071395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:30.712 [2024-11-29 13:16:33.071401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:30.712 [2024-11-29 13:16:33.083234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.083686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.083699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.083704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.083854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.084007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.084013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.084018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.084023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.095858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.712 [2024-11-29 13:16:33.096461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.712 [2024-11-29 13:16:33.096491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.712 [2024-11-29 13:16:33.096500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.712 [2024-11-29 13:16:33.096665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.712 [2024-11-29 13:16:33.096819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.712 [2024-11-29 13:16:33.096825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.712 [2024-11-29 13:16:33.096831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.712 [2024-11-29 13:16:33.096836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.712 [2024-11-29 13:16:33.108516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:30.712 [2024-11-29 13:16:33.108998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:30.712 [2024-11-29 13:16:33.109012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:30.712 [2024-11-29 13:16:33.109018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:30.712 [2024-11-29 13:16:33.109173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:30.712 [2024-11-29 13:16:33.109324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:30.712 [2024-11-29 13:16:33.109329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:30.712 [2024-11-29 13:16:33.109335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:30.712 [2024-11-29 13:16:33.109340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:30.712 [2024-11-29 13:16:33.120641] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization...
00:32:30.713 [2024-11-29 13:16:33.120686] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:30.713 [2024-11-29 13:16:33.121165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:30.713 [2024-11-29 13:16:33.121645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:30.713 [2024-11-29 13:16:33.121658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420
00:32:30.713 [2024-11-29 13:16:33.121664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set
00:32:30.713 [2024-11-29 13:16:33.121814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor
00:32:30.713 [2024-11-29 13:16:33.121973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:30.713 [2024-11-29 13:16:33.121979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:30.713 [2024-11-29 13:16:33.121985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:30.713 [2024-11-29 13:16:33.121990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:30.713 [2024-11-29 13:16:33.133827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.134260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.134290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.134300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.134468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.134622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.134628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.134634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.134640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.146487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.147077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.147107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.147115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.147289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.147442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.147449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.147454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.147460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.159147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.159690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.159706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.159712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.159863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.160013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.160018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.160028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.160033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.171852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.172286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.172299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.172305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.172455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.172606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.172612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.172617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.172622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.184454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.184774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.184787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.184793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.184944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.185094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.185100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.185105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.185111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.197087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.197623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.197653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.197662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.197828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.197981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.197988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.197993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.198000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.209691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.210220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.210259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.210428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.210581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.210588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.210594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.210600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.213119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:30.713 [2024-11-29 13:16:33.222296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.222884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.222915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.222924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.223090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.223259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.223267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.223272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.713 [2024-11-29 13:16:33.223278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.713 [2024-11-29 13:16:33.234971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.713 [2024-11-29 13:16:33.235510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.713 [2024-11-29 13:16:33.235540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.713 [2024-11-29 13:16:33.235549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.713 [2024-11-29 13:16:33.235715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.713 [2024-11-29 13:16:33.235868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.713 [2024-11-29 13:16:33.235874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.713 [2024-11-29 13:16:33.235880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.235886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:30.714 [2024-11-29 13:16:33.242427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.714 [2024-11-29 13:16:33.242448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.714 [2024-11-29 13:16:33.242458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.714 [2024-11-29 13:16:33.242465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:30.714 [2024-11-29 13:16:33.242469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.714 [2024-11-29 13:16:33.243602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.714 [2024-11-29 13:16:33.243752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.714 [2024-11-29 13:16:33.243754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.714 [2024-11-29 13:16:33.247592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.248206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.248237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.248245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.248415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.248568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.248574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.248580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.248586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.260300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.260880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.260912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.260921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.261088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.261249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.261256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.261262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.261268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.272969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.273399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.273431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.273440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.273606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.273760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.273767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.273777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.273783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.285626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.286104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.286119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.286125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.286282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.286432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.286438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.286443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.286448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.298269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.298589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.298604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.298611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.298762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.298913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.298918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.298923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.298928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.310935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.311526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.311558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.311567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.311732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.311886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.311892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.311898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.311904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:30.714 [2024-11-29 13:16:33.323607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:30.714 [2024-11-29 13:16:33.324063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.714 [2024-11-29 13:16:33.324094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:30.714 [2024-11-29 13:16:33.324102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:30.714 [2024-11-29 13:16:33.324277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:30.714 [2024-11-29 13:16:33.324431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:30.714 [2024-11-29 13:16:33.324438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:30.714 [2024-11-29 13:16:33.324444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:30.714 [2024-11-29 13:16:33.324450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.026 [2024-11-29 13:16:33.336276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.026 [2024-11-29 13:16:33.336740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.026 [2024-11-29 13:16:33.336755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.026 [2024-11-29 13:16:33.336761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.336912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.337062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.337068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.337073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.337078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.348915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.349421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.349434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.349440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.349590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.349740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.349746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.349751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.349756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.361576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.362128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.362168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.362177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.362345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.362499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.362505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.362510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.362516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.374226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.374735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.374750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.374756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.374906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.375056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.375062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.375068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.375073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.386895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.387500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.387531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.387540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.387709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.387861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.387868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.387874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.387880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.399567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.400060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.400075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.400081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.400239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.400390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.400396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.400401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.400406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.412238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.412646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.412676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.412684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.412850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.413003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.413010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.413015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.413021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.424859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.425308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.425324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.425329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.425480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.425630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.425636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.425641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.425646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.437473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.437818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.437831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.437837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.437987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.438136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.438142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.438151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.438156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 4813.33 IOPS, 18.80 MiB/s [2024-11-29T12:16:33.707Z] [2024-11-29 13:16:33.450131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.027 [2024-11-29 13:16:33.450599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.027 [2024-11-29 13:16:33.450613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.027 [2024-11-29 13:16:33.450618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.027 [2024-11-29 13:16:33.450768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.027 [2024-11-29 13:16:33.450917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.027 [2024-11-29 13:16:33.450923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.027 [2024-11-29 13:16:33.450928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.027 [2024-11-29 13:16:33.450933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.027 [2024-11-29 13:16:33.462751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.463207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.463227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.463233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.463388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.463539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.463545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.463550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.463555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.475386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.475733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.475746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.475752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.475901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.476051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.476057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.476062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.476067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.488057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.488529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.488543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.488549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.488698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.488849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.488854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.488859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.488864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.500697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.501139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.501151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.501156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.501310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.501460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.501467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.501471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.501476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.513307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.513747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.513760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.513765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.513914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.514064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.514069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.514074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.514079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.525910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.526457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.526494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.526502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.526668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.526821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.526827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.526833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.526839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.538529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.539017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.539033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.539038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.539193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.539344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.539350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.539355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.539360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.551194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.551608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.551638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.551647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.551813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.551966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.551972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.551977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.551983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.563812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.564392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.564423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.564432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.564601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.564754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.564761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.564766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.564772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.028 [2024-11-29 13:16:33.576462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.028 [2024-11-29 13:16:33.577028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.028 [2024-11-29 13:16:33.577059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.028 [2024-11-29 13:16:33.577067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.028 [2024-11-29 13:16:33.577240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.028 [2024-11-29 13:16:33.577393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.028 [2024-11-29 13:16:33.577399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.028 [2024-11-29 13:16:33.577405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.028 [2024-11-29 13:16:33.577411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.589093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.589578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.589593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.589599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.589749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.589899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.589905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.589910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.589914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.601740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.602179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.602194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.602199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.602349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.602499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.602509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.602514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.602519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.614342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.614791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.614804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.614809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.614959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.615109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.615114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.615120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.615124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.626952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.627581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.627612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.627621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.627789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.627942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.627949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.627955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.627961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.639656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.640228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.640259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.640268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.640436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.640589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.640596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.640601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.640607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.652310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.652820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.652836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.652841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.652991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.653141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.653147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.653152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.653157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.664980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.665440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.665453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.665458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.665608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.665757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.665763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.665768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.665772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.677589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.678074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.678086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.678091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.678245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.678395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.678401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.678406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.678411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.029 [2024-11-29 13:16:33.690374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.029 [2024-11-29 13:16:33.690977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.029 [2024-11-29 13:16:33.691012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.029 [2024-11-29 13:16:33.691021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.029 [2024-11-29 13:16:33.691194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.029 [2024-11-29 13:16:33.691347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.029 [2024-11-29 13:16:33.691354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.029 [2024-11-29 13:16:33.691360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.029 [2024-11-29 13:16:33.691365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.703042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.703621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.703651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.703660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.703826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.703980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.703986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.703991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.703997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.715692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.716173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.716204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.716213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.716381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.716534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.716541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.716546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.716552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.728405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.728999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.729029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.729038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.729215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.729369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.729375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.729380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.729386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.741089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.741637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.741668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.741676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.741843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.741996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.742002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.742007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.742013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.753715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.754231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.754247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.754252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.754403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.754553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.754559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.754564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.754568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.766400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.766841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.766854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.766859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.767009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.767163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.767173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.767178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.767182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.779001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.779455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.779468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.779473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.779622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.300 [2024-11-29 13:16:33.779772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.300 [2024-11-29 13:16:33.779778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.300 [2024-11-29 13:16:33.779783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.300 [2024-11-29 13:16:33.779787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.300 [2024-11-29 13:16:33.791608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.300 [2024-11-29 13:16:33.791948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.300 [2024-11-29 13:16:33.791960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.300 [2024-11-29 13:16:33.791965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.300 [2024-11-29 13:16:33.792114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.792268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.792274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.792280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.792284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.804252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.804589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.804602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.804607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.804756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.804906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.804912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.804918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.804923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.816904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.817466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.817497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.817506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.817671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.817825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.817831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.817837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.817843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.829549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.830053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.830068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.830074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.830230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.830381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.830387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.830393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.830398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.842229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.842770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.842800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.842809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.842974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.843128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.843134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.843141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.843147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.854854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.855490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.855524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.855533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.855700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.855853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.855859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.855865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.855871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.867565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.868024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.868039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.868045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.868199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.868351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.868357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.868362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.868368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.880193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.880653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.880666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.880672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.880821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.880971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.880977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.880982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.880987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.892811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.893453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.893484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.893492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.893662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.893816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.893823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.893828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.893834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 [2024-11-29 13:16:33.905524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.301 [2024-11-29 13:16:33.905874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.301 [2024-11-29 13:16:33.905888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.301 [2024-11-29 13:16:33.905894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.301 [2024-11-29 13:16:33.906044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.301 [2024-11-29 13:16:33.906199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.301 [2024-11-29 13:16:33.906205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.301 [2024-11-29 13:16:33.906210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.301 [2024-11-29 13:16:33.906215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.301 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.301 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:31.301 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.301 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.301 [2024-11-29 13:16:33.918187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 [2024-11-29 13:16:33.918698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.302 [2024-11-29 13:16:33.918710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.302 [2024-11-29 13:16:33.918715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.302 [2024-11-29 13:16:33.918865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.302 [2024-11-29 13:16:33.919014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.302 [2024-11-29 13:16:33.919021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.302 [2024-11-29 13:16:33.919026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.302 [2024-11-29 13:16:33.919030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.302 [2024-11-29 13:16:33.930870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.302 [2024-11-29 13:16:33.931525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.302 [2024-11-29 13:16:33.931555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.302 [2024-11-29 13:16:33.931568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.302 [2024-11-29 13:16:33.931735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.302 [2024-11-29 13:16:33.931889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.302 [2024-11-29 13:16:33.931895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.302 [2024-11-29 13:16:33.931901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.302 [2024-11-29 13:16:33.931908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.302 [2024-11-29 13:16:33.943605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.302 [2024-11-29 13:16:33.944261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.302 [2024-11-29 13:16:33.944292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.302 [2024-11-29 13:16:33.944301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.302 [2024-11-29 13:16:33.944469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.302 [2024-11-29 13:16:33.944622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.302 [2024-11-29 13:16:33.944628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.302 [2024-11-29 13:16:33.944634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.302 [2024-11-29 13:16:33.944640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.302 [2024-11-29 13:16:33.956204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.302 [2024-11-29 13:16:33.956661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.302 [2024-11-29 13:16:33.956676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.302 [2024-11-29 13:16:33.956682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.302 [2024-11-29 13:16:33.956832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.302 [2024-11-29 13:16:33.956982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.302 [2024-11-29 13:16:33.956988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.302 [2024-11-29 13:16:33.956993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.302 [2024-11-29 13:16:33.956998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 [2024-11-29 13:16:33.964442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.302 [2024-11-29 13:16:33.968827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.302 [2024-11-29 13:16:33.969408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.302 [2024-11-29 13:16:33.969438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.302 [2024-11-29 13:16:33.969446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.302 [2024-11-29 13:16:33.969613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.302 [2024-11-29 13:16:33.969766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.302 [2024-11-29 13:16:33.969772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.302 [2024-11-29 13:16:33.969777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.302 [2024-11-29 13:16:33.969783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.302 13:16:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.563 [2024-11-29 13:16:33.981469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.563 [2024-11-29 13:16:33.981980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.563 [2024-11-29 13:16:33.981995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.563 [2024-11-29 13:16:33.982000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.563 [2024-11-29 13:16:33.982150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.563 [2024-11-29 13:16:33.982305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.563 [2024-11-29 13:16:33.982311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.563 [2024-11-29 13:16:33.982317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.563 [2024-11-29 13:16:33.982322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.563 [2024-11-29 13:16:33.994151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.563 [2024-11-29 13:16:33.994651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.563 [2024-11-29 13:16:33.994664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.563 [2024-11-29 13:16:33.994670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.563 [2024-11-29 13:16:33.994819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.563 [2024-11-29 13:16:33.994969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.563 [2024-11-29 13:16:33.994975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.563 [2024-11-29 13:16:33.994980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.563 [2024-11-29 13:16:33.994984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.563 Malloc0 00:32:31.563 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.563 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:31.563 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.563 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.563 [2024-11-29 13:16:34.006805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.563 [2024-11-29 13:16:34.007279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.563 [2024-11-29 13:16:34.007293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.563 [2024-11-29 13:16:34.007298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.563 [2024-11-29 13:16:34.007448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.563 [2024-11-29 13:16:34.007597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.563 [2024-11-29 13:16:34.007603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.564 [2024-11-29 13:16:34.007608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.564 [2024-11-29 13:16:34.007613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.564 [2024-11-29 13:16:34.019496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.564 [2024-11-29 13:16:34.019716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.564 [2024-11-29 13:16:34.019728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.564 [2024-11-29 13:16:34.019734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.564 [2024-11-29 13:16:34.019883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.564 [2024-11-29 13:16:34.020033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.564 [2024-11-29 13:16:34.020039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.564 [2024-11-29 13:16:34.020044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.564 [2024-11-29 13:16:34.020049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.564 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:31.564 [2024-11-29 13:16:34.032163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.564 [2024-11-29 13:16:34.032613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.564 [2024-11-29 13:16:34.032626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b010 with addr=10.0.0.2, port=4420 00:32:31.564 [2024-11-29 13:16:34.032635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b010 is same with the state(6) to be set 00:32:31.565 [2024-11-29 13:16:34.032785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b010 (9): Bad file descriptor 00:32:31.565 [2024-11-29 13:16:34.032934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:31.565 [2024-11-29 13:16:34.032940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:31.565 [2024-11-29 13:16:34.032945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:31.565 [2024-11-29 13:16:34.032940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.565 [2024-11-29 13:16:34.032950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:31.565 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.565 13:16:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1108597 00:32:31.565 [2024-11-29 13:16:34.044780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:31.565 [2024-11-29 13:16:34.071954] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:32:33.206 4787.14 IOPS, 18.70 MiB/s [2024-11-29T12:16:36.456Z] 5800.00 IOPS, 22.66 MiB/s [2024-11-29T12:16:37.841Z] 6587.89 IOPS, 25.73 MiB/s [2024-11-29T12:16:38.780Z] 7220.40 IOPS, 28.20 MiB/s [2024-11-29T12:16:39.721Z] 7721.27 IOPS, 30.16 MiB/s [2024-11-29T12:16:40.660Z] 8138.33 IOPS, 31.79 MiB/s [2024-11-29T12:16:41.599Z] 8497.77 IOPS, 33.19 MiB/s [2024-11-29T12:16:42.538Z] 8799.50 IOPS, 34.37 MiB/s [2024-11-29T12:16:42.538Z] 9072.87 IOPS, 35.44 MiB/s 00:32:39.858 Latency(us) 00:32:39.858 [2024-11-29T12:16:42.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.858 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:39.858 Verification LBA range: start 0x0 length 0x4000 00:32:39.858 Nvme1n1 : 15.01 9075.37 35.45 13464.69 0.00 5660.48 733.87 22063.79 00:32:39.858 [2024-11-29T12:16:42.538Z] =================================================================================================================== 00:32:39.858 [2024-11-29T12:16:42.538Z] Total : 9075.37 35.45 13464.69 0.00 5660.48 733.87 22063.79 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.119 rmmod nvme_tcp 00:32:40.119 rmmod nvme_fabrics 00:32:40.119 rmmod nvme_keyring 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1109610 ']' 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1109610 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1109610 ']' 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1109610 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.119 13:16:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109610 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109610' 00:32:40.119 killing process with pid 1109610 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1109610 00:32:40.119 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1109610 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.381 13:16:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:42.298 00:32:42.298 real 0m28.350s 00:32:42.298 user 1m3.543s 00:32:42.298 sys 0m7.716s 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:42.298 ************************************ 00:32:42.298 END TEST nvmf_bdevperf 00:32:42.298 ************************************ 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.298 13:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.570 ************************************ 00:32:42.570 START TEST nvmf_target_disconnect 00:32:42.570 ************************************ 00:32:42.570 13:16:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:42.570 * Looking for test storage... 
00:32:42.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:42.570 13:16:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.570 
--rc genhtml_branch_coverage=1 00:32:42.570 --rc genhtml_function_coverage=1 00:32:42.570 --rc genhtml_legend=1 00:32:42.570 --rc geninfo_all_blocks=1 00:32:42.570 --rc geninfo_unexecuted_blocks=1 00:32:42.570 00:32:42.570 ' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.570 --rc genhtml_branch_coverage=1 00:32:42.570 --rc genhtml_function_coverage=1 00:32:42.570 --rc genhtml_legend=1 00:32:42.570 --rc geninfo_all_blocks=1 00:32:42.570 --rc geninfo_unexecuted_blocks=1 00:32:42.570 00:32:42.570 ' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.570 --rc genhtml_branch_coverage=1 00:32:42.570 --rc genhtml_function_coverage=1 00:32:42.570 --rc genhtml_legend=1 00:32:42.570 --rc geninfo_all_blocks=1 00:32:42.570 --rc geninfo_unexecuted_blocks=1 00:32:42.570 00:32:42.570 ' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:42.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.570 --rc genhtml_branch_coverage=1 00:32:42.570 --rc genhtml_function_coverage=1 00:32:42.570 --rc genhtml_legend=1 00:32:42.570 --rc geninfo_all_blocks=1 00:32:42.570 --rc geninfo_unexecuted_blocks=1 00:32:42.570 00:32:42.570 ' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:42.570 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.571 13:16:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:42.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:42.571 13:16:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.723 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.724 
13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:50.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:50.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:50.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:50.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.724 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.725 13:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:32:50.725 00:32:50.725 --- 10.0.0.2 ping statistics --- 00:32:50.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.725 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:32:50.725 00:32:50.725 --- 10.0.0.1 ping statistics --- 00:32:50.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.725 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.725 13:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 ************************************ 00:32:50.725 START TEST nvmf_target_disconnect_tc1 00:32:50.725 ************************************ 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.725 [2024-11-29 13:16:52.872266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.725 [2024-11-29 13:16:52.872330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8bfae0 with 
addr=10.0.0.2, port=4420 00:32:50.725 [2024-11-29 13:16:52.872358] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:50.725 [2024-11-29 13:16:52.872372] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:50.725 [2024-11-29 13:16:52.872379] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:50.725 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:50.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:50.725 Initializing NVMe Controllers 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:50.725 00:32:50.725 real 0m0.136s 00:32:50.725 user 0m0.063s 00:32:50.725 sys 0m0.074s 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 ************************************ 00:32:50.725 END TEST nvmf_target_disconnect_tc1 00:32:50.725 ************************************ 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.725 13:16:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 ************************************ 00:32:50.725 START TEST nvmf_target_disconnect_tc2 00:32:50.725 ************************************ 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1115709 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1115709 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1115709 ']' 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.725 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.726 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.726 13:16:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:50.726 [2024-11-29 13:16:53.020111] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:32:50.726 [2024-11-29 13:16:53.020162] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.726 [2024-11-29 13:16:53.113899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.726 [2024-11-29 13:16:53.150597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.726 [2024-11-29 13:16:53.150628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.726 [2024-11-29 13:16:53.150639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.726 [2024-11-29 13:16:53.150645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.726 [2024-11-29 13:16:53.150651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
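The `waitforlisten 1115709` step above blocks until the freshly started `nvmf_tgt` process accepts connections on its RPC socket (`/var/tmp/spdk.sock`), which is why the "Waiting for process to start up and listen on UNIX domain socket" message appears before the DPDK/reactor startup notices. A minimal Python sketch of that polling loop, assuming only the socket path (the real shell helper also tracks the PID and its own retry budget):

```python
import socket
import time

def wait_for_listen(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll a UNIX-domain socket until some server accepts connections on it."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # RPC server is up and listening
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # socket file not created yet, or not listening yet
        finally:
            s.close()
    return False
```

This is an illustrative stand-in, not the SPDK helper itself; the shape (connect, back off, retry) is the same.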
00:32:50.726 [2024-11-29 13:16:53.152416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:50.726 [2024-11-29 13:16:53.152537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:50.726 [2024-11-29 13:16:53.152684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:50.726 [2024-11-29 13:16:53.152685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:51.295 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.295 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:51.295 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.295 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.295 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 Malloc0 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 [2024-11-29 13:16:53.903549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 [2024-11-29 13:16:53.943850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1116037 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.296 13:16:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:53.868 13:16:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1115709 00:32:53.868 13:16:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Write completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Write completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Write completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Write completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 
Write completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.868 Read completed with error (sct=0, sc=8) 00:32:53.868 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 [2024-11-29 13:16:55.977892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O 
failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Read completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 00:32:53.869 Write completed with error (sct=0, sc=8) 00:32:53.869 starting I/O failed 
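Each of the entries above is one outstanding I/O that the `reconnect` app had queued on a qpair when `kill -9 1115709` took the target down (status `sct=0, sc=8` per completion). A small illustrative parser, not part of the SPDK tree, that tallies reads vs. writes from this log format:

```python
import re
from collections import Counter

# Matches entries like: "Read completed with error (sct=0, sc=8)"
ENTRY = re.compile(r"(Read|Write) completed with error \(sct=(\d+), sc=(\d+)\)")

def tally_failures(log_text: str) -> Counter:
    """Count failed completions per opcode in a reconnect-app log excerpt."""
    return Counter(m.group(1) for m in ENTRY.finditer(log_text))

sample = (
    "Read completed with error (sct=0, sc=8) starting I/O failed "
    "Write completed with error (sct=0, sc=8) starting I/O failed "
    "Read completed with error (sct=0, sc=8) starting I/O failed"
)
print(tally_failures(sample))  # Counter({'Read': 2, 'Write': 1})
```

The random mix of reads and writes matches the `-w randrw -M 50` workload the app was launched with.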
00:32:53.869 [2024-11-29 13:16:55.978229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.869 [2024-11-29 13:16:55.978595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.978644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.978911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.978927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.979118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.979129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.979569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.979606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.979966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.979979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 
00:32:53.869 [2024-11-29 13:16:55.980208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.980219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.980633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.980648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.980986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.980996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.981396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.981434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 00:32:53.869 [2024-11-29 13:16:55.981662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.869 [2024-11-29 13:16:55.981675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.869 qpair failed and we were unable to recover it. 
00:32:53.872 [2024-11-29 13:16:56.015086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.015103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.015453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.015471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.015779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.015796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.016107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.016124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.016449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.016467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 
00:32:53.872 [2024-11-29 13:16:56.016778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.016796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.017077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.017099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.017401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.017420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.017737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.017754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.018053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.018070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 
00:32:53.872 [2024-11-29 13:16:56.018167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.018186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.018393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.018409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.018727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.018743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.019048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.019064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 00:32:53.872 [2024-11-29 13:16:56.019272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.019289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.872 qpair failed and we were unable to recover it. 
00:32:53.872 [2024-11-29 13:16:56.019614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.872 [2024-11-29 13:16:56.019630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.019962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.019979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.020302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.020319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.020662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.020678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.020989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.021005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 
00:32:53.873 [2024-11-29 13:16:56.021338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.021716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.021733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.022041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.022058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.022372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.022390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.022676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.022692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 
00:32:53.873 [2024-11-29 13:16:56.022881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.022899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.023209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.023227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.023537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.023554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.023859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.023875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.024200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.024218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 
00:32:53.873 [2024-11-29 13:16:56.024504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.024521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.024864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.024880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.025192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.025209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.025433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.025453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.025789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.025806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 
00:32:53.873 [2024-11-29 13:16:56.026124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.026140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.026534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.026551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.026851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.026867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.027064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.027080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.027391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.027409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 
00:32:53.873 [2024-11-29 13:16:56.027703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.027720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.028037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.028053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.028397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.873 [2024-11-29 13:16:56.028421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.873 qpair failed and we were unable to recover it. 00:32:53.873 [2024-11-29 13:16:56.028770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.028791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.029001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.029021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.029321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.029349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.029536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.029560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.029911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.029933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.030261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.030282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.030595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.030617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.030975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.030996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.031205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.031227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.031600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.031622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.031928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.031948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.032256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.032278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.032613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.032635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.032886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.032907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.033227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.033249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.033583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.033605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.033920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.033941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.034262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.034285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.034505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.034528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.034882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.034903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.035216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.035246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.035557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.035917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.035937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.036274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.036296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.036536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.036557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.036882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.036903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.037223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.037246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.037558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.037579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.037889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.037909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.038221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.038244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.038590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.038620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.038952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.038973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.874 [2024-11-29 13:16:56.039181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.039212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.039578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.039606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.039816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.039847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.040185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.040216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 00:32:53.874 [2024-11-29 13:16:56.040570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.874 [2024-11-29 13:16:56.040598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.874 qpair failed and we were unable to recover it. 
00:32:53.877 [2024-11-29 13:16:56.081143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.877 [2024-11-29 13:16:56.081180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.877 qpair failed and we were unable to recover it. 00:32:53.877 [2024-11-29 13:16:56.081505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.081534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.081775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.081804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.082136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.082181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.082547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.082576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.082929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.082958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.083319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.083350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.083704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.083739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.084075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.084104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.084393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.084425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.084778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.084806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.085171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.085202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.085559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.085588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.085935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.085963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.086317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.086347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.086698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.086727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.087057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.087086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.087446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.087477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.087830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.087859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.088196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.088225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.088551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.088579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.088928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.088957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.089318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.089350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.089673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.089703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.090080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.090109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.090445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.090475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.090822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.090852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.091233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.091263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.091624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.091653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.091984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.092014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 
00:32:53.878 [2024-11-29 13:16:56.092375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.092406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.092770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.092799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.093126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.093154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.093494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.878 [2024-11-29 13:16:56.093523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.878 qpair failed and we were unable to recover it. 00:32:53.878 [2024-11-29 13:16:56.093873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.093902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.094305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.094335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.094750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.094779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.095154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.095209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.095538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.095567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.095860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.095889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.096246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.096277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.096641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.096673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.097051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.097080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.097317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.097354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.097767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.098114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.098143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.098499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.098529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.098883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.098913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.099269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.099299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.099649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.099678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.100024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.100054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.100289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.100324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.100682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.100711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.101057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.101087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.101434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.101465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.101696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.101724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.102069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.102099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.102538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.102569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.102888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.102918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.103269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.103301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.103653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.103682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.104017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.104046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.104298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.104330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.104647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.104676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.104994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.105023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.105259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.105288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.105632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.105661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.106012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.106041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.106277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.106309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.106668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.106697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 
00:32:53.879 [2024-11-29 13:16:56.107046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.107075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.107305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.879 [2024-11-29 13:16:56.107334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.879 qpair failed and we were unable to recover it. 00:32:53.879 [2024-11-29 13:16:56.107682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.107710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.108059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.108088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.108325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.108355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 
00:32:53.880 [2024-11-29 13:16:56.108509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.108537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.108880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.108909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.109251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.109281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.109687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.109716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 00:32:53.880 [2024-11-29 13:16:56.110038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.880 [2024-11-29 13:16:56.110067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.880 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.150705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.150735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.151084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.151110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.151472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.151502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.151858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.151886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.152222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.152253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.152592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.152621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.152971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.152999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.153359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.153390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.153704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.153733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.154085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.154113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.154352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.154381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.154763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.154792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.155118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.155147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.155516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.155552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.155790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.155819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.156170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.156201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.156546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.156576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.156933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.156962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.157314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.157345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.157706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.157735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.158074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.158103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.158458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.158489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.158819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.158848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.159191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.159221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.159555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.159591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.159947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.159976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.160328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.160358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.160707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.160735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.161096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.161125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 00:32:53.883 [2024-11-29 13:16:56.161472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.883 [2024-11-29 13:16:56.161502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.883 qpair failed and we were unable to recover it. 
00:32:53.883 [2024-11-29 13:16:56.161832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.161861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.162190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.162221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.162572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.162601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.162949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.162977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.163330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.163360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.163703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.163731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.164096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.164125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.164493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.164523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.164873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.164901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.165258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.165289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.165682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.165711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.166053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.166082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.166490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.166522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.166760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.166787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.167119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.167149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.167548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.167578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.167914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.167943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.168293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.168323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.168671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.168699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.169107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.169135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.169475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.169504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.169853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.169881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.170176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.170210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.170561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.170598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.170939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.170968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.171315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.171345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.171692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.171721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.172055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.172085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.172426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.172456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.172799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.172828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.173173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.173204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.173629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.173657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.173994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.174023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.174377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.174408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.174754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.174783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 
00:32:53.884 [2024-11-29 13:16:56.175133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.175173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.175522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.175550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.175841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.884 [2024-11-29 13:16:56.175870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.884 qpair failed and we were unable to recover it. 00:32:53.884 [2024-11-29 13:16:56.176215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.176246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 00:32:53.885 [2024-11-29 13:16:56.176589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.176618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 
00:32:53.885 [2024-11-29 13:16:56.176961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.176989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 00:32:53.885 [2024-11-29 13:16:56.177337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.177367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 00:32:53.885 [2024-11-29 13:16:56.177712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.177749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 00:32:53.885 [2024-11-29 13:16:56.178093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.178122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 00:32:53.885 [2024-11-29 13:16:56.178456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.885 [2024-11-29 13:16:56.178485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.885 qpair failed and we were unable to recover it. 
00:32:53.885 [2024-11-29 13:16:56.178801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.885 [2024-11-29 13:16:56.178830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:53.885 qpair failed and we were unable to recover it.
[message group above repeated continuously through 2024-11-29 13:16:56.221449: every connect() attempt to 10.0.0.2:4420 for tqpair=0x7f00cc000b90 failed with errno = 111 and the qpair could not be recovered]
00:32:53.888 [2024-11-29 13:16:56.221798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.221827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.222181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.222211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.222542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.222571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.222926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.222954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.223310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.223341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 
00:32:53.888 [2024-11-29 13:16:56.223684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.223712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.224067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.224096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.224473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.224503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.224852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.224881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.225235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.225265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 
00:32:53.888 [2024-11-29 13:16:56.225635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.225670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.226012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.226041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.226399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.226429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.226780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.226809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.227156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.227196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 
00:32:53.888 [2024-11-29 13:16:56.227548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.227577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.227944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.227973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.228330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.228361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.228720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.228749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.229111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.229139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 
00:32:53.888 [2024-11-29 13:16:56.229549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.229579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.229954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.229983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.230237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.230270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.230631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.230660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 00:32:53.888 [2024-11-29 13:16:56.231016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.888 [2024-11-29 13:16:56.231045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.888 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.231396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.231426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.231796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.231825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.232157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.232200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.232472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.232501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.232860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.232888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.233247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.233277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.233599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.233628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.233964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.233993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.234342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.234372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.234744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.234773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.235177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.235207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.235551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.235580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.235814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.235847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.236225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.236256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.236621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.236650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.237004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.237033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.237289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.237320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.237661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.237689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.238057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.238086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.238422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.238453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.238694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.238723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.239155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.239194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.239583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.239612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.239948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.239979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.240334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.240364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.240717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.240753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.241101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.241131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.241496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.241527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.241895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.241924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.242260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.242291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.889 [2024-11-29 13:16:56.242653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.242682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.242919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.242947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.243311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.243340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.243688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.243717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 00:32:53.889 [2024-11-29 13:16:56.244071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.889 [2024-11-29 13:16:56.244101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.889 qpair failed and we were unable to recover it. 
00:32:53.890 [2024-11-29 13:16:56.244354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.244384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.244753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.244781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.245194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.245225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.245617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.245653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.245975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.246004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 
00:32:53.890 [2024-11-29 13:16:56.246358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.246389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.246743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.246771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.247209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.247239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.247590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.247619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.248025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.248053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 
00:32:53.890 [2024-11-29 13:16:56.248387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.248416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.248772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.248803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.249174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.249205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.249619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.249648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.250064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.250093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 
00:32:53.890 [2024-11-29 13:16:56.250428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.250459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.250798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.250828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.251166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.251197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.251556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.251585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 00:32:53.890 [2024-11-29 13:16:56.251937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.890 [2024-11-29 13:16:56.251965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.890 qpair failed and we were unable to recover it. 
00:32:53.893 [2024-11-29 13:16:56.292855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.292884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.293239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.293269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.293641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.293671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.294040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.294069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.294418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.294449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 
00:32:53.893 [2024-11-29 13:16:56.294798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.294826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.295216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.295246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.295497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.295526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.893 qpair failed and we were unable to recover it. 00:32:53.893 [2024-11-29 13:16:56.295869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.893 [2024-11-29 13:16:56.295899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.296272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.296303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.296683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.296712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.296996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.297025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.297263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.297293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.297552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.297580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.297943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.297971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.298336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.298367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.298721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.298749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.298965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.298998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.299345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.299377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.299641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.299672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.300008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.300043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.300432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.300463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.300837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.300866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.301239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.301268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.301650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.301680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.302032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.302061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.302417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.302447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.302813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.302841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.303267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.303297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.303541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.303569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.303967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.303996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.304344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.304375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.304737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.304767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.304999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.305029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.305381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.305411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.305713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.305742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.306119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.306519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.306547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.306792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.306821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.307182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.307213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 
00:32:53.894 [2024-11-29 13:16:56.307534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.307564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.307935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.894 [2024-11-29 13:16:56.307963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.894 qpair failed and we were unable to recover it. 00:32:53.894 [2024-11-29 13:16:56.308326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.308355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.308727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.308755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.308986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.309015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.309424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.309454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.309801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.309830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.310190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.310221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.310438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.310468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.310821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.310849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.311212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.311242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.311681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.311709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.312048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.312077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.312343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.312373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.312741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.312769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.313143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.313181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.313526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.313556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.313905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.313934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.314290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.314320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.314712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.315063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.315098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.315466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.315496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.315859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.315888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.316225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.316256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.316618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.316646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.316982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.317012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.317355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.317385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.317745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.317774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.318134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.318185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.318534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.318562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.318919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.318947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.319292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.319323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.319655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.319683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.320047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.320075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.320425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.320457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 
00:32:53.895 [2024-11-29 13:16:56.320819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.320848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.321195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.321225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.895 qpair failed and we were unable to recover it. 00:32:53.895 [2024-11-29 13:16:56.321576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.895 [2024-11-29 13:16:56.321617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.321960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.321989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.322422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.322452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.322795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.322824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.323184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.323215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.323568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.323597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.323928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.323956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.324288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.324318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.324673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.324701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.325044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.325074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.325412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.325443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.325808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.325837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.326198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.326229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.326594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.326623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.326978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.327007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.327371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.327401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.327661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.327694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.328057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.328086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.328446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.328476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.328833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.328862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.329194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.329224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.329568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.329596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.330008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.330036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.330444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.330682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.330712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.331055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.331084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.331459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.331490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.331808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.331845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.332198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.332227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.332599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.332628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.332970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.333000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.333388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.333418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.333788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.333817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 
00:32:53.896 [2024-11-29 13:16:56.334175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.334207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.896 qpair failed and we were unable to recover it. 00:32:53.896 [2024-11-29 13:16:56.334567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.896 [2024-11-29 13:16:56.334595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.334957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.334985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.335305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.335336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.335704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.335733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.336065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.336095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.336457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.336488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.336847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.336875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.337310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.337340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.337664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.337693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.338059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.338088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.338447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.338478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.338845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.338872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.339130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.339170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.339580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.339610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.339979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.340008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.340260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.340294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.340648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.340678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.341034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.341063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.341407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.341438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.341795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.341824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.342262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.342292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.342667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.342697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.343075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.343103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.343445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.343475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.343642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.343674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.344075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.344104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.344463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.344493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.344840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.344870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.345241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.345272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 
00:32:53.897 [2024-11-29 13:16:56.345621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.345657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.346038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.346066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.897 [2024-11-29 13:16:56.346425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.897 [2024-11-29 13:16:56.346455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.897 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.346834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.346862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.347203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.347233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.347583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.347612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.347977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.348005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.348335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.348364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.348737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.348765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.349136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.349176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.349520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.349549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.349906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.349934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.350295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.350326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.350681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.350710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.351064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.351093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.351436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.351467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.351823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.351852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.352211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.352241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.352586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.352616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.352860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.352889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.353258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.353287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.353649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.353684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.354058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.354087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.354465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.354496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.354847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.354876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.355237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.355269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.355614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.355643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.356008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.356037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.356402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.356433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 00:32:53.898 [2024-11-29 13:16:56.356827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-11-29 13:16:56.356855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.898 qpair failed and we were unable to recover it. 
00:32:53.898 [2024-11-29 13:16:56.357215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.898 [2024-11-29 13:16:56.357245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:53.898 qpair failed and we were unable to recover it.
00:32:53.901 (the same three-line error repeats verbatim for every subsequent reconnection attempt through [2024-11-29 13:16:56.401463]; duplicate entries elided)
00:32:53.901 [2024-11-29 13:16:56.401826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-11-29 13:16:56.401854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.901 qpair failed and we were unable to recover it. 00:32:53.901 [2024-11-29 13:16:56.402214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-11-29 13:16:56.402244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.901 qpair failed and we were unable to recover it. 00:32:53.901 [2024-11-29 13:16:56.402613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-11-29 13:16:56.402648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.901 qpair failed and we were unable to recover it. 00:32:53.901 [2024-11-29 13:16:56.403004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-11-29 13:16:56.403034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.901 qpair failed and we were unable to recover it. 00:32:53.901 [2024-11-29 13:16:56.403388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.403418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.403784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.403813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.404174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.404204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.404525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.404554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.404926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.404954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.405302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.405333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.405659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.405689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.406053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.406082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.406441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.406474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.406822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.406850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.407239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.407593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.407621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.407987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.408016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.408404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.408434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.408793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.408822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.409184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.409214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.409580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.409609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.409957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.409987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.410352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.410382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.410745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.410774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.411182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.411212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.411612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.411641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.412042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.412071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.412409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.412439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.412793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.412821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.413187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.413218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.413576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.413605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.413973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.414001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.414333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.414364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.414720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.414749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.415110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.415139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.415594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.415623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.415992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.416021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.416386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.416418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.416786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.416814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.417182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.417212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 
00:32:53.902 [2024-11-29 13:16:56.417637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.417666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.902 [2024-11-29 13:16:56.418027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.902 [2024-11-29 13:16:56.418056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.902 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.418417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.418454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.418812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.418842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.419200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.419230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.419583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.419611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.419964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.419994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.420367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.420397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.420771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.420800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.421225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.421255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.421614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.421644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.422007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.422035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.422469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.422499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.422844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.422873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.423232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.423262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.423625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.423655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.424005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.424035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.424407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.424437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.424814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.424842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.425177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.425208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.425588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.425619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.425974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.426005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.426366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.426397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.426767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.426795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.427171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.427203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.427579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.427609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.427970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.427999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.428405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.428435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.428798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.428828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.429191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.429221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.429611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.429640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.429997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.430027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.430278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.430309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.430687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.430716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.431070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.431100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 
00:32:53.903 [2024-11-29 13:16:56.431472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.431502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.431861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.431890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.432264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.432294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.432650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.903 [2024-11-29 13:16:56.432679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.903 qpair failed and we were unable to recover it. 00:32:53.903 [2024-11-29 13:16:56.433031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.904 [2024-11-29 13:16:56.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.904 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.476323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.476354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.476709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.476738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.477093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.477124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.477507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.477538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.477895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.477925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.478290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.478321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.478570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.478599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.478945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.478974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.479228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.479258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.479644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.479674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.480060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.480089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.480324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.480357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.480732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.480768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.481130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.481169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.481554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.481585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.481964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.481993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.482351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.482383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.482743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.482772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.483139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.483179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.483523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.483554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.483906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.483934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.484289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.484321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.484699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.484728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.485090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.485119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.485544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.485574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.485938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.485967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.486341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.486373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.486743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.486771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.487126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.487155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.487562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.487591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.487976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.488005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.488323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.488352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.488627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.488656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.489015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.489045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.489400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.489433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 
00:32:53.907 [2024-11-29 13:16:56.489839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.907 [2024-11-29 13:16:56.489867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.907 qpair failed and we were unable to recover it. 00:32:53.907 [2024-11-29 13:16:56.490102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.490134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.490521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.490552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.490960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.490991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.491348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.491380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.491634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.491663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.492009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.492039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.492391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.492421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.492771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.492801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.493181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.493212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.493612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.493641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.493997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.494027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.494395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.494425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.494789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.494819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.495180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.495210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.495440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.495474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.495920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.495949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.496206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.496243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.496647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.496676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.496957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.496985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.497343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.497373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.497735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.497765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.498147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.498191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.498540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.498570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.498830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.498862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.499229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.499260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.499629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.499658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.499805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.499836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.500206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.500238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.500622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.500651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.501033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.501061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.501331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.501362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.501585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.501615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.501975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.502005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.502340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.502370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 
00:32:53.908 [2024-11-29 13:16:56.502776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.502805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.503187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.503218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.503580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.503609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.503965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.503995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.908 qpair failed and we were unable to recover it. 00:32:53.908 [2024-11-29 13:16:56.504380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.908 [2024-11-29 13:16:56.504411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.504834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.504863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.505225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.505255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.505607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.505637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.505893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.505921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.506286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.506316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.506647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.506677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.506932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.506962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.507334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.507365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.507731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.507760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.508135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.508176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.508550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.508586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.508955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.508984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.509224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.509255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.509674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.509703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.510087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.510116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.510548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.510581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.510948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.511319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.511358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.511801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.511830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.512188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.512219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.512583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.512613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.513011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.513040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.513429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.513468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.513833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.513862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.514227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.514257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.514505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.514538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.514870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.514899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.515143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.515187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.515462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.515494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.515866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.515899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.516267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.516297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.516661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.516691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.516934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.516962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.517330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.517362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.517717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.517747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 
00:32:53.909 [2024-11-29 13:16:56.518116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.518145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.518524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.909 [2024-11-29 13:16:56.518554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.909 qpair failed and we were unable to recover it. 00:32:53.909 [2024-11-29 13:16:56.518813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.518841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.519196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.519227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.519577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.519608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 
00:32:53.910 [2024-11-29 13:16:56.519966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.520006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.520364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.520395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.520764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.520795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.521181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.521212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.521461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.521491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 
00:32:53.910 [2024-11-29 13:16:56.521847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.521876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.522267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:53.910 [2024-11-29 13:16:56.522625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.910 [2024-11-29 13:16:56.522656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:53.910 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.522987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.523019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.523402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.523434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 
00:32:54.185 [2024-11-29 13:16:56.523786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.523815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.524290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.524322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.524683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.524713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.525003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.525033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.525369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.525399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 
00:32:54.185 [2024-11-29 13:16:56.525775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.525806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.526179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.526210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.526552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.526589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.526926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.526955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.527324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.527355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 
00:32:54.185 [2024-11-29 13:16:56.527595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.527627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.527857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.185 [2024-11-29 13:16:56.527886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.185 qpair failed and we were unable to recover it. 00:32:54.185 [2024-11-29 13:16:56.528127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.528173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.528565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.528599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.528946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.528982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.529426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.529457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.529824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.529852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.530217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.530247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.530638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.530669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.531028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.531057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.531305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.531335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.531704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.531733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.532094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.532123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.532502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.532534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.532743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.532772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.532887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.532914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.533277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.533309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.533710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.533742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.534110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.534139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.534527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.534558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.534930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.534961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.535333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.535364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.535618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.535646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.536012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.536040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.536439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.536471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.536707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.536737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.537120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.537148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.537547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.537577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.537862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 00:32:54.186 [2024-11-29 13:16:56.538244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.186 [2024-11-29 13:16:56.538275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.186 qpair failed and we were unable to recover it. 
00:32:54.186 [2024-11-29 13:16:56.538636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.538666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.539010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.539038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.539407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.539438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.539801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.539830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.540190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.540222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 
00:32:54.187 [2024-11-29 13:16:56.540568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.540598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.540976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.541005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.541428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.541465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.541814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.541845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.542212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.542243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 
00:32:54.187 [2024-11-29 13:16:56.542604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.542632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.543002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.543032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.543487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.543520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.543868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.543897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 00:32:54.187 [2024-11-29 13:16:56.544241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.187 [2024-11-29 13:16:56.544271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420 00:32:54.187 qpair failed and we were unable to recover it. 
00:32:54.187 [2024-11-29 13:16:56.544650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.544680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.544931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.544960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.545320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.545354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.545720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.545749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.546110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.546139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.546511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.546542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.546899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.546929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.547278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.547309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.547669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.547697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.548061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.548090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.548437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.548468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.548847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.187 [2024-11-29 13:16:56.548877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.187 qpair failed and we were unable to recover it.
00:32:54.187 [2024-11-29 13:16:56.549241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.549271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.549633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.549662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.550024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.550054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.550437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.550789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.550820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.551188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.551219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.551593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.551623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.551991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.552021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.552348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.552379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.552735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.552765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.553178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.553208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.553570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.553600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.553963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.553993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.554358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.554387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.554806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.554835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.555151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.555194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.555567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.555605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.555940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.555968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.556313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.556345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.556778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.556807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.557154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.557213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.557574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.557604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.557967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.557995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.558352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.558383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.558765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.558794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.559147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.559186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.559544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.559574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.559858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.559891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.560253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.188 [2024-11-29 13:16:56.560283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.188 qpair failed and we were unable to recover it.
00:32:54.188 [2024-11-29 13:16:56.560623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.560652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.561008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.561037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.561396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.561427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.561776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.561805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.562193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.562224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.562581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.562610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.562968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.562996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.563346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.563376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.563724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.563754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.564140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.564183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.564563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.564592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.564915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.564943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.565283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.565314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.565573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.565601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.565998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.566027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.566423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.566453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.566820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.566849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.567238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.567269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.567634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.567664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.568074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.568103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.568520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.568550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.568786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.568818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.569201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.569233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.569593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.569621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.569988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.570017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.570253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.570285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.570653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.570682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.570940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.570968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.571301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.571332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.189 qpair failed and we were unable to recover it.
00:32:54.189 [2024-11-29 13:16:56.571736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.189 [2024-11-29 13:16:56.571765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.572114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.572144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.572491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.572527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.572894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.572923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.573268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.573298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.573649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.573678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.574016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.574045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.574351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.574381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.574730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.574759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.575101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.575130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00cc000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Read completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 Write completed with error (sct=0, sc=8)
00:32:54.190 starting I/O failed
00:32:54.190 [2024-11-29 13:16:56.576014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:54.190 [2024-11-29 13:16:56.576646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.576770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.577425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.577532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.577942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.577980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.578480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.578588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.579045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.579082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.579517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.579626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.190 [2024-11-29 13:16:56.580097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.190 [2024-11-29 13:16:56.580135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.190 qpair failed and we were unable to recover it.
00:32:54.191 [2024-11-29 13:16:56.580545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.580577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.580926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.580956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.581317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.581350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.581718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.581748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.582102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.582132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 
00:32:54.191 [2024-11-29 13:16:56.582470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.582523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.582948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.582978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.583312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.583345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.583708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.583739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.584138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.584178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 
00:32:54.191 [2024-11-29 13:16:56.584541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.584571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.584926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.584954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.585324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.585354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.585717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.585746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.586104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.586134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 
00:32:54.191 [2024-11-29 13:16:56.586515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.586545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.586791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.586825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.587081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.587110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.587513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.587869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.587899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 
00:32:54.191 [2024-11-29 13:16:56.588270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.588302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.588510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.588538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.191 [2024-11-29 13:16:56.588905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.191 [2024-11-29 13:16:56.588934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.191 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.589280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.589312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.589576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.589604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.589945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.589976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.590339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.590370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.590744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.590774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.591135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.591176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.591538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.591568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.591935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.591965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.592327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.592359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.592599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.592631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.592996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.593025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.593389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.593420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.593789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.593819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.594225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.594256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.594655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.594685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.595033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.595063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.595397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.595429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.595790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.595819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.596186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.596217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.596580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.596609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.596853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.596882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.597235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.597266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.597628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.597664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.598033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.598061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.598398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.598429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.598684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.598713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.599072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.599101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 
00:32:54.192 [2024-11-29 13:16:56.599472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.599502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.599752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.599784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.192 qpair failed and we were unable to recover it. 00:32:54.192 [2024-11-29 13:16:56.600175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.192 [2024-11-29 13:16:56.600206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.600608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.600637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.600989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.601018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.601391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.601423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.601758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.601787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.602149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.602192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.602617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.602646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.603088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.603119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.603563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.603595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.603851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.603884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.604238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.604270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.604523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.604552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.604914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.604943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.605306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.605337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.605712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.605741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.606103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.606132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.606512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.606542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.606912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.606941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.607281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.607311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.607659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.607688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.608054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.608083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.610622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.610694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.611112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.611148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.611555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.611585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.611955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.611983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.612353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.612387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.612748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.612777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 00:32:54.193 [2024-11-29 13:16:56.613124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.613155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.193 qpair failed and we were unable to recover it. 
00:32:54.193 [2024-11-29 13:16:56.613554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.193 [2024-11-29 13:16:56.613584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.194 qpair failed and we were unable to recover it. 00:32:54.194 [2024-11-29 13:16:56.613925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.194 [2024-11-29 13:16:56.613956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.194 qpair failed and we were unable to recover it. 00:32:54.194 [2024-11-29 13:16:56.614314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.194 [2024-11-29 13:16:56.614345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.194 qpair failed and we were unable to recover it. 00:32:54.194 [2024-11-29 13:16:56.614698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.194 [2024-11-29 13:16:56.614729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.194 qpair failed and we were unable to recover it. 00:32:54.194 [2024-11-29 13:16:56.615084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.194 [2024-11-29 13:16:56.615114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.194 qpair failed and we were unable to recover it. 
00:32:54.194 [2024-11-29 13:16:56.615484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.194 [2024-11-29 13:16:56.615526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.194 qpair failed and we were unable to recover it.
00:32:54.194-00:32:54.197 [2024-11-29 13:16:56.615890 - 13:16:56.659141] (the connect()/qpair error above repeated for the same tqpair=0x7f00d8000b90, addr=10.0.0.2, port=4420; duplicate log entries elided)
00:32:54.197 [2024-11-29 13:16:56.659524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.659553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.659927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.659957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.660292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.660323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.660695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.660724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.661080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.661109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 
00:32:54.197 [2024-11-29 13:16:56.661481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.661521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.661884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.661913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.662284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.662315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.662674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.662703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.663083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.663111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 
00:32:54.197 [2024-11-29 13:16:56.663530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.663562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.663826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.663855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.197 qpair failed and we were unable to recover it. 00:32:54.197 [2024-11-29 13:16:56.664204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.197 [2024-11-29 13:16:56.664235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.664497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.664526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.664780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.664809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.665204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.665237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.665676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.665705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.666108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.666137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.666514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.666546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.666940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.666970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.667333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.667364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.667742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.667771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.668147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.668205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.668582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.668611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.668977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.669008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.669384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.669417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.669773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.669803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.670174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.670205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.670544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.670573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.670823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.670852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.671247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.671277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.671648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.671677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.672028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.672058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.672404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.672435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.672796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.672825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.673191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.673221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.673575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.673605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.673967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.673996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.674369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.674399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.674762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.674790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.675171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.675202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.675595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.675970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.676001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.676355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.676387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.676750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.676779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.677147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.677196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.677588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.677618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.677982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.678013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.678387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.678418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 00:32:54.198 [2024-11-29 13:16:56.678779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.678808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.198 qpair failed and we were unable to recover it. 
00:32:54.198 [2024-11-29 13:16:56.679182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.198 [2024-11-29 13:16:56.679213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.679582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.679980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.680011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.680270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.680301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.680652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.680682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.199 [2024-11-29 13:16:56.681047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.681077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.681440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.681470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.681720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.681749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.682101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.682130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.682536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.682567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.199 [2024-11-29 13:16:56.682936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.682967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.683204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.683234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.683577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.683606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.683950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.683981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.684324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.684357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.199 [2024-11-29 13:16:56.684618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.684651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.685039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.685070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.685431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.685461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.685827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.685856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.686201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.686233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.199 [2024-11-29 13:16:56.686619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.686648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.687009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.687039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.687412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.687450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.687686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.687715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.688099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.688128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.199 [2024-11-29 13:16:56.688517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.688547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.688899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.688930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.689283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.689314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.689683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.689712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 00:32:54.199 [2024-11-29 13:16:56.690072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.199 [2024-11-29 13:16:56.690103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.199 qpair failed and we were unable to recover it. 
00:32:54.202 [2024-11-29 13:16:56.732094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.732124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.732495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.732527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.732908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.732939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.733188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.733220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.733668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.733698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 
00:32:54.202 [2024-11-29 13:16:56.733951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.733982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.734343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.734374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.734719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.202 [2024-11-29 13:16:56.734749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.202 qpair failed and we were unable to recover it. 00:32:54.202 [2024-11-29 13:16:56.735125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.735153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.735562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.735592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.735966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.735996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.736255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.736289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.736663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.736694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.737055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.737084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.737461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.737492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.737860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.737896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.738129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.738170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.738556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.738586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.738952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.738983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.739353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.739386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.739755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.739785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.740155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.740201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.740586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.740616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.740961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.740991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.741370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.741401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.741755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.741785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.742060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.742089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.742459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.742489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.742850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.742882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.743067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.743097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.743463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.743495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.743744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.743772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.744130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.744171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.744544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.744575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.744950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.744980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.745342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.745374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.745714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.745743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.746105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.746134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.746524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.746554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.746899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.746929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 
00:32:54.203 [2024-11-29 13:16:56.747318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.747348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.747742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.747772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.748121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.748168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.748525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.203 [2024-11-29 13:16:56.748554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.203 qpair failed and we were unable to recover it. 00:32:54.203 [2024-11-29 13:16:56.748802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.748834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.749184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.749217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.749633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.749663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.750013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.750045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.750283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.750314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.750699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.750728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.751092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.751123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.751404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.751435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.751763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.751794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.752046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.752076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.752498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.752529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.752903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.752934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.753304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.753335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.753706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.753735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.754102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.754131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.754549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.754581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.754944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.754973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.755316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.755346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.755714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.755743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.756115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.756144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.756518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.756549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.756899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.756928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.757271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.757303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.757580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.757608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.757962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.757992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.758332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.758364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.758620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.758650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.759051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.759081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.759436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.759468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.759817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.759845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.760218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.760249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.204 [2024-11-29 13:16:56.760573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.760602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.760968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.760996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.761423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.761454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.761820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.761849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 00:32:54.204 [2024-11-29 13:16:56.762219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.204 [2024-11-29 13:16:56.762249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.204 qpair failed and we were unable to recover it. 
00:32:54.207 [2024-11-29 13:16:56.803910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.207 [2024-11-29 13:16:56.803940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.207 qpair failed and we were unable to recover it. 00:32:54.207 [2024-11-29 13:16:56.804317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.207 [2024-11-29 13:16:56.804348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.207 qpair failed and we were unable to recover it. 00:32:54.207 [2024-11-29 13:16:56.804613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.804641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.804866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.804899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.805292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.805322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.805710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.805738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.805996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.806025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.806390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.806420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.806768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.806797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.807168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.807199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.807554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.807583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.807960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.807988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.808258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.808294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.808671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.808701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.809069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.809098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.809360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.809390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.809756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.809785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.810148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.810190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.810538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.810568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.810950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.810979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.811346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.811377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.811734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.811763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.812123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.812153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.812541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.812572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.812946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.812975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.813341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.813373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.813735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.813767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.814117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.814146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.814505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.814534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.814788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.814822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.815183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.815214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.815567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.815596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.815948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.815977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.816344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.816373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.816750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.816779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.817140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.817195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.817554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.817584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.817948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.817978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.818351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.818382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 00:32:54.208 [2024-11-29 13:16:56.818750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.208 [2024-11-29 13:16:56.818781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.208 qpair failed and we were unable to recover it. 
00:32:54.208 [2024-11-29 13:16:56.819041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.819071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.819457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.819489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.819842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.819872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.820253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.820283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.820641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.820670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.821010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.821040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.821460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.821489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.821843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.821875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.822233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.822264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.822609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.822640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.823005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.823034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.823400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.823430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.823795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.823830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.824171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.824202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.824441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.824470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.824710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.824742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.825095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.825124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.825489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.825519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.825760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.825793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.826146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.826190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.826477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.826508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.826900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.826930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.827272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.827305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.827671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.827702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.828055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.828085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.828462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.828492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.828861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.828891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.829256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.829287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.829645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.829674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.830047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.830076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.830458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.830488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.830851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.830881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.831253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.831284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.831633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.831664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [2024-11-29 13:16:56.832021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.832050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 
00:32:54.209 [2024-11-29 13:16:56.832311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.209 [2024-11-29 13:16:56.832342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.209 qpair failed and we were unable to recover it. 00:32:54.209 [... last two messages repeated for tqpair=0x7f00d8000b90 (addr=10.0.0.2, port=4420, errno = 111) through 2024-11-29 13:16:56.877231 ...]
00:32:54.486 [2024-11-29 13:16:56.877571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.877601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.877865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.877895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.878244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.878275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.878611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.878642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.878995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.879024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 
00:32:54.486 [2024-11-29 13:16:56.879425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.879455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.879694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.879726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.880147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.880192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.880611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.880642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.880999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.881031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 
00:32:54.486 [2024-11-29 13:16:56.881396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.881430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.881673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.881706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.881988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.882018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.882344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.882374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.882739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.882769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 
00:32:54.486 [2024-11-29 13:16:56.883129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.883171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.883535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.883565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.883923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.883952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.884319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.884349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.884722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.884752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 
00:32:54.486 [2024-11-29 13:16:56.885080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.885110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.486 qpair failed and we were unable to recover it. 00:32:54.486 [2024-11-29 13:16:56.885456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.486 [2024-11-29 13:16:56.885488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.885842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.885877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.886241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.886272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.886666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.886695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.887054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.887085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.887449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.887480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.887840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.887869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.888171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.888202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.888587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.888616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.888966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.888995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.889366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.889396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.889750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.889780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.890144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.890187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.890604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.890632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.890885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.890916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.891282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.891315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.891558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.891590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.891859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.891889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.892238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.892269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.892629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.892659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.893017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.893047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.893289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.893320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.893692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.893721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.893984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.894014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.894424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.894455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.894811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.894840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.895200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.895231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.895599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.895629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.487 [2024-11-29 13:16:56.895989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.896020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 
00:32:54.487 [2024-11-29 13:16:56.896398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.487 [2024-11-29 13:16:56.896429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.487 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.896790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.896820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.897177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.897210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.897582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.897613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.897972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.898003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 
00:32:54.488 [2024-11-29 13:16:56.898375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.898410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.898704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.898738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.899068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.899098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.899461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.899493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.899753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.899782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 
00:32:54.488 [2024-11-29 13:16:56.900038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.900071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.900341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.900371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.900803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.900841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.901187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.901220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.901553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.901583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 
00:32:54.488 [2024-11-29 13:16:56.901954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.901984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.902335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.902365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.902728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.902759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.903115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.903145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.903517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.903546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 
00:32:54.488 [2024-11-29 13:16:56.903914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.903945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.904312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.904343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.904695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.904726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.905090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.905121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.905498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.905529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 
00:32:54.488 [2024-11-29 13:16:56.905873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.905903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.906197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.906229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.906597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.906626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.906987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.488 [2024-11-29 13:16:56.907017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.488 qpair failed and we were unable to recover it. 00:32:54.488 [2024-11-29 13:16:56.907386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.489 [2024-11-29 13:16:56.907417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.489 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.949775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.949804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.950172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.950202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.950537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.950568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.950911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.950942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.951293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.951325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.951680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.951714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.952075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.952105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.952528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.952559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.952924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.952954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.953321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.953353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.953719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.953750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.953988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.954020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.954451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.954482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.954822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.954852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.955223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.955254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.955614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.955645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.956004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.956034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.956380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.956411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.956772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.956801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.957178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.957216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.957592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.957622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.957994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.958024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.958430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.958462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.958897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.958925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.959284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.959315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 
00:32:54.493 [2024-11-29 13:16:56.959650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.959680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.493 [2024-11-29 13:16:56.960045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.493 [2024-11-29 13:16:56.960073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.493 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.960425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.960456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.960808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.960838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.961196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.961226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.961577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.961607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.961966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.961996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.962273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.962305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.962543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.962575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.962830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.962859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.963219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.963251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.963619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.963648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.964016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.964046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.964385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.964415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.964776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.964806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.965178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.965209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.965483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.965512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.965901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.965929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.966297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.966328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.966669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.966698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.967067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.967097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.967545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.967576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.967930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.967960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.968319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.968349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.968611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.968640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.968990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.969020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.969389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.969421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.969778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.969806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.970232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.970263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.970636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.970675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 
00:32:54.494 [2024-11-29 13:16:56.971035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.971064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.494 [2024-11-29 13:16:56.971410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.494 [2024-11-29 13:16:56.971442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.494 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.971814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.971843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.972217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.972250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.972623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.972659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 
00:32:54.495 [2024-11-29 13:16:56.973014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.973043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.973382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.973413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.973817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.973848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.974206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.974236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.974587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.974617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 
00:32:54.495 [2024-11-29 13:16:56.974978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.975007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.975386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.975415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.975778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.975807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.976189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.976220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.976491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.976523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 
00:32:54.495 [2024-11-29 13:16:56.976884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.976913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.977284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.977314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.977680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.977709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.978082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.978113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.978538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.978568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 
00:32:54.495 [2024-11-29 13:16:56.978916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.978947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.979331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.979362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.979599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.979632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.979995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.980027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.495 qpair failed and we were unable to recover it. 00:32:54.495 [2024-11-29 13:16:56.980388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.495 [2024-11-29 13:16:56.980420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.496 qpair failed and we were unable to recover it. 
00:32:54.499 [2024-11-29 13:16:57.022972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.023002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 00:32:54.499 [2024-11-29 13:16:57.023387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.023418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 00:32:54.499 [2024-11-29 13:16:57.023801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.023831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 00:32:54.499 [2024-11-29 13:16:57.024096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.024125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 00:32:54.499 [2024-11-29 13:16:57.024505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.024536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 
00:32:54.499 [2024-11-29 13:16:57.024894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.024922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.499 qpair failed and we were unable to recover it. 00:32:54.499 [2024-11-29 13:16:57.025293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.499 [2024-11-29 13:16:57.025324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.025579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.025609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.025973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.026004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.026374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.026405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.026663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.026696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.026958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.026988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.027232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.027263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.027628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.027658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.028025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.028054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.028500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.028530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.028883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.028913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.029129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.029175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.029553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.029583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.029960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.029989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.030264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.030595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.030626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.030998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.031026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.031388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.031419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.031796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.031826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.032196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.032226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.032338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.032365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.032694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.032723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.033180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.033211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.033577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.033612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.033973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.034001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.034430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.034683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.034711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.035153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.035198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 00:32:54.500 [2024-11-29 13:16:57.035594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.035623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.500 qpair failed and we were unable to recover it. 
00:32:54.500 [2024-11-29 13:16:57.035984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.500 [2024-11-29 13:16:57.036012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.036408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.036438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.036804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.036833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.037063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.037092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.037457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.037489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 
00:32:54.501 [2024-11-29 13:16:57.037867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.037896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.038266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.038297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.038659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.038689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.039054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.039084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.039455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.039487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 
00:32:54.501 [2024-11-29 13:16:57.039730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.039761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.040197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.040230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.040589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.040619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.041003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.041033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.041187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.041219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 
00:32:54.501 [2024-11-29 13:16:57.041465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.041495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.041748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.041779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.042150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.042197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.042553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.042584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.042851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.042881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 
00:32:54.501 [2024-11-29 13:16:57.043278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.043310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.043576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.043608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.043845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.043874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.044259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.501 [2024-11-29 13:16:57.044290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.501 qpair failed and we were unable to recover it. 00:32:54.501 [2024-11-29 13:16:57.044621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.044653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 
00:32:54.502 [2024-11-29 13:16:57.044940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.044971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.045222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.045257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.045598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.045630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.045971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.046001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.046245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.046278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 
00:32:54.502 [2024-11-29 13:16:57.046651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.046681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.046911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.046941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.047197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.047229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.047604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.047634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.047988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.048025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 
00:32:54.502 [2024-11-29 13:16:57.048289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.048321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.048705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.048735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.049091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.049123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.049487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.049520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 00:32:54.502 [2024-11-29 13:16:57.049876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.049906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 
00:32:54.502 [2024-11-29 13:16:57.050273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.502 [2024-11-29 13:16:57.050305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.502 qpair failed and we were unable to recover it. 
[the three messages above repeat verbatim, with only timestamps advancing, from 13:16:57.050 through 13:16:57.094: every reconnect attempt on tqpair=0x7f00d8000b90 to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) and the qpair cannot be recovered]
00:32:54.506 [2024-11-29 13:16:57.094972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.095002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.095357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.095387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.095737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.095766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.096129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.096178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.096520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.096550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 
00:32:54.506 [2024-11-29 13:16:57.096923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.096953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.097315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.097346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.097711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.097740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.098092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.098123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.098497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.098527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 
00:32:54.506 [2024-11-29 13:16:57.098896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.098926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.099288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.099319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.099696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.099725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.100154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.100201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.100599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.100628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 
00:32:54.506 [2024-11-29 13:16:57.101006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.101035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.101391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.506 [2024-11-29 13:16:57.101421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.506 qpair failed and we were unable to recover it. 00:32:54.506 [2024-11-29 13:16:57.101791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.101821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.102180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.102210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.102590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.102620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 
00:32:54.507 [2024-11-29 13:16:57.102980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.103010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.103375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.103768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.103798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.104185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.104215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.104581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.104609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 
00:32:54.507 [2024-11-29 13:16:57.104973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.105002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.105262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.105293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.105694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.105723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.106064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.106094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.106471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.106502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 
00:32:54.507 [2024-11-29 13:16:57.106871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.106900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.107264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.107297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.107623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.107654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.107995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.108024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.108382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 
00:32:54.507 [2024-11-29 13:16:57.108770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.108798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.109144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.109188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.109542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.109572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.109939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.109969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.110334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.110371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 
00:32:54.507 [2024-11-29 13:16:57.110709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.110738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.507 [2024-11-29 13:16:57.111078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.507 [2024-11-29 13:16:57.111107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.507 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.111479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.111510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.111852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.111882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.112243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.112273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.508 [2024-11-29 13:16:57.112611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.112640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.113037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.113065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.113428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.113457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.113816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.113845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.114208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.114239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.508 [2024-11-29 13:16:57.114564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.114593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.114906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.114937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.115300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.115330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.115691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.115720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.116086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.116116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.508 [2024-11-29 13:16:57.116530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.116561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.116923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.116953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.117308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.117339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.117691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.117721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.118050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.118079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.508 [2024-11-29 13:16:57.118437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.118469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.118821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.118851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.119220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.119251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.119616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.119644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.120002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.120031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.508 [2024-11-29 13:16:57.120383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.120415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.120775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.120804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.121179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.121210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.121572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.121601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 00:32:54.508 [2024-11-29 13:16:57.121966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.508 [2024-11-29 13:16:57.121994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.508 qpair failed and we were unable to recover it. 
00:32:54.509 [2024-11-29 13:16:57.122338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.509 [2024-11-29 13:16:57.122369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.509 qpair failed and we were unable to recover it. 00:32:54.509 [2024-11-29 13:16:57.122733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.509 [2024-11-29 13:16:57.122762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.509 qpair failed and we were unable to recover it. 00:32:54.509 [2024-11-29 13:16:57.123112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.509 [2024-11-29 13:16:57.123143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.509 qpair failed and we were unable to recover it. 00:32:54.509 [2024-11-29 13:16:57.123525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.509 [2024-11-29 13:16:57.123554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.509 qpair failed and we were unable to recover it. 00:32:54.509 [2024-11-29 13:16:57.123917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.509 [2024-11-29 13:16:57.123947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.509 qpair failed and we were unable to recover it. 
00:32:54.509 [2024-11-29 13:16:57.124310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.509 [2024-11-29 13:16:57.124340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.509 qpair failed and we were unable to recover it.
[... the same error triplet (connect() failed with errno = 111 -> nvme_tcp_qpair_connect_sock error for tqpair=0x7f00d8000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously through [2024-11-29 13:16:57.170111]; repeated occurrences elided ...]
00:32:54.785 [2024-11-29 13:16:57.170541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.170571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.170821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.170850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.171084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.171116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.171520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.171551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.171895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 
00:32:54.785 [2024-11-29 13:16:57.172274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.172306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.172696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.172725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.173100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.173130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.173528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.173558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.173984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.174015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 
00:32:54.785 [2024-11-29 13:16:57.174477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.174509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.174856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.174886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.175240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.175271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.175659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.175689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.176047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.176076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 
00:32:54.785 [2024-11-29 13:16:57.176376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.176407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.176764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.176794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.785 [2024-11-29 13:16:57.177153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.785 [2024-11-29 13:16:57.177194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.785 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.177609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.177639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.178019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.178050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.178490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.178521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.178754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.178788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.179141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.179189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.179587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.179618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.179999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.180029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.180462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.180494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.180838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.180868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.181248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.181279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.181661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.181690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.181952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.181982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.182421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.182453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.182799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.182829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.183208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.183239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.183643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.183672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.184028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.184058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.184407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.184439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.184802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.184839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.185203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.185233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.185523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.185552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.185894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.185924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.186281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.186311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.186695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.186724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.186973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.187007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.187428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.187458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.187817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.187847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 
00:32:54.786 [2024-11-29 13:16:57.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.188270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.786 [2024-11-29 13:16:57.188530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.786 [2024-11-29 13:16:57.188559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.786 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.188919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.188949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.189339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.189370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.189719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.189749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.190092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.190121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.190538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.190569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.190931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.190960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.191327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.191358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.191709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.191738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.191901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.191930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.192253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.192286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.192661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.192692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.193111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.193140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.193491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.193521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.193777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.193806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.194053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.194083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.194478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.194511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.194920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.194950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.195316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.195347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.195600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.195630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.196001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.196030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.196454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.196485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.196827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.196857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.197205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.197235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.197631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.197660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.198039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.198068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.198342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.198371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.198623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.198652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.199081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.199110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 
00:32:54.787 [2024-11-29 13:16:57.199502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.787 [2024-11-29 13:16:57.199533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.787 qpair failed and we were unable to recover it. 00:32:54.787 [2024-11-29 13:16:57.199903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.788 [2024-11-29 13:16:57.199945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.788 qpair failed and we were unable to recover it. 00:32:54.788 [2024-11-29 13:16:57.200248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.788 [2024-11-29 13:16:57.200279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.788 qpair failed and we were unable to recover it. 00:32:54.788 [2024-11-29 13:16:57.200635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.788 [2024-11-29 13:16:57.200664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.788 qpair failed and we were unable to recover it. 00:32:54.788 [2024-11-29 13:16:57.201036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.788 [2024-11-29 13:16:57.201065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.788 qpair failed and we were unable to recover it. 
00:32:54.792 [2024-11-29 13:16:57.242498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.242528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.242906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.242935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.243185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.243218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.243588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.243619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.243870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.243899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 
00:32:54.792 [2024-11-29 13:16:57.244310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.244342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.244711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.244748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.245183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.245214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.245555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.245586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.245942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.245973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 
00:32:54.792 [2024-11-29 13:16:57.246328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.246359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.246763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.246794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.246961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.246993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.247347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.247379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.247754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.247783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 
00:32:54.792 [2024-11-29 13:16:57.248150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.248193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.248533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.248563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.248977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.249006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.249380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.249411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.249622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.249652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 
00:32:54.792 [2024-11-29 13:16:57.249984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.250013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.250295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.250326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.250700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.250730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.251099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-11-29 13:16:57.251128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.792 qpair failed and we were unable to recover it. 00:32:54.792 [2024-11-29 13:16:57.251514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.251545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.251882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.251912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.252277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.252308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.252678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.252708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.253077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.253106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.253461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.253492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.253857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.253887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.254265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.254296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.254547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.254577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.254954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.254985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.255296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.255327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.255673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.255703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.256083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.256114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.256477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.256508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.256875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.256906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.257300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.257331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.257704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.257733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.258081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.258112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.258479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.258510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.258871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.258900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.259268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.259301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.259650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.259679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.260043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.260437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.260467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.260716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.260745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.261130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.261171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 
00:32:54.793 [2024-11-29 13:16:57.261391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.261420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.261792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.261821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.262183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.262216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.262466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-11-29 13:16:57.262495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.793 qpair failed and we were unable to recover it. 00:32:54.793 [2024-11-29 13:16:57.262846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.262877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.794 [2024-11-29 13:16:57.263243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.263274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.263707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.263744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.264090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.264121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.264401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.264431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.264705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.794 [2024-11-29 13:16:57.265063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.265094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.265472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.265503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.265728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.265756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.266120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.266149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.266520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.266550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.794 [2024-11-29 13:16:57.266905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.266934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.267283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.267313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.267666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.267697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.267970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.268000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.268349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.268380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.794 [2024-11-29 13:16:57.268740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.268770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.269200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.269232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.269633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.269663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.270033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.270064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.270427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.270458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.794 [2024-11-29 13:16:57.270832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.270862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.271151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.271195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.271578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.271607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.271958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.271987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 00:32:54.794 [2024-11-29 13:16:57.272244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-11-29 13:16:57.272275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.794 qpair failed and we were unable to recover it. 
00:32:54.798 [2024-11-29 13:16:57.314712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.314741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.315102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.315131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.315501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.315886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.315916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.316180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.316210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 
00:32:54.798 [2024-11-29 13:16:57.316507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.316536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.316902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.798 [2024-11-29 13:16:57.316931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.798 qpair failed and we were unable to recover it. 00:32:54.798 [2024-11-29 13:16:57.317274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.317304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.317674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.317703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.318049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.318078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.318439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.318470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.318824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.318854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.319293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.319323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.319699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.319728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.320003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.320032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.320390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.320420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.320791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.320827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.321116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.321146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.321531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.321561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.321903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.321934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.322296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.322326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.322681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.322711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.323075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.323104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.323446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.323475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.323885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.323915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.324287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.324317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.324548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.324579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.324967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.324996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.325383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.325414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.325777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.325805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.326191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.326222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.326593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.326622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.326975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.327005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.327257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.327287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.327564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.327593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 
00:32:54.799 [2024-11-29 13:16:57.327939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.799 [2024-11-29 13:16:57.327967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.799 qpair failed and we were unable to recover it. 00:32:54.799 [2024-11-29 13:16:57.328309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.328340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.328706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.328736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.328986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.329015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.329391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.329421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.800 [2024-11-29 13:16:57.329790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.329819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.330177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.330207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.330508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.330537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.330895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.330924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.331269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.331301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.800 [2024-11-29 13:16:57.331643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.331672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.332062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.332091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.332366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.332396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.332785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.332814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.333194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.333225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.800 [2024-11-29 13:16:57.333573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.333602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.333969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.333998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.334232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.334261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.334605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.334634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.335003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.335032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.800 [2024-11-29 13:16:57.335349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.335379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.335738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.335768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.336126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.336155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.336506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.336537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.336877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.336906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.800 [2024-11-29 13:16:57.337273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.337303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.337663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.337693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.338055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.338084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.338459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.338489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 00:32:54.800 [2024-11-29 13:16:57.338857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.800 [2024-11-29 13:16:57.338888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.800 qpair failed and we were unable to recover it. 
00:32:54.801 [2024-11-29 13:16:57.339246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.339278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.339628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.339657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.340020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.340049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.340413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.340442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.340809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.340838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 
00:32:54.801 [2024-11-29 13:16:57.341185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.341215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.341563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.341592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.341947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.341977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.342338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.342368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 00:32:54.801 [2024-11-29 13:16:57.342731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.801 [2024-11-29 13:16:57.342760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.801 qpair failed and we were unable to recover it. 
00:32:54.801 [2024-11-29 13:16:57.343114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:54.801 [2024-11-29 13:16:57.343143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:54.801 qpair failed and we were unable to recover it.
00:32:54.805 [2024-11-29 13:16:57.386983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.387013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.387367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.387397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.387761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.387790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.388169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.388201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.388638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.388667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 
00:32:54.805 [2024-11-29 13:16:57.389029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.389059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.389398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.389429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.389792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.389821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.390069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.390098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.390470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.390500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 
00:32:54.805 [2024-11-29 13:16:57.390863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.390892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.391253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.391282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.391660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.391688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.392054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.392090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.805 qpair failed and we were unable to recover it. 00:32:54.805 [2024-11-29 13:16:57.392422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.805 [2024-11-29 13:16:57.392452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.392816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.392846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.393209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.393239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.393395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.393425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.393779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.393809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.394181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.394212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.394609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.394638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.395001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.395029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.395401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.395430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.395790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.395818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.396186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.396216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.396578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.396607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.396967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.396996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.397340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.397371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.397747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.397776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.398129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.398168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.398516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.398545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.398903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.398932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.399155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.399196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.399583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.399613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.399974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.400004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.400387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.400426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.400778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.400807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.401176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.401207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.401550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.401581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.401961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.401991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 
00:32:54.806 [2024-11-29 13:16:57.402358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.402389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.402749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.402777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.403146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.403184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.403533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.403562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.806 qpair failed and we were unable to recover it. 00:32:54.806 [2024-11-29 13:16:57.403831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.806 [2024-11-29 13:16:57.403860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 
00:32:54.807 [2024-11-29 13:16:57.404213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.404242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.404616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.404646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.404908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.404937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.405288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.405319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.405693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.405721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 
00:32:54.807 [2024-11-29 13:16:57.406081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.406110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.406487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.406518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.406860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.406890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.407248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.407285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.407639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.407668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 
00:32:54.807 [2024-11-29 13:16:57.408035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.408063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.408318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.408348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.408696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.408726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.409091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.409120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.409484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.409514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 
00:32:54.807 [2024-11-29 13:16:57.409868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.409896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.410317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.410347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.410715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.410744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.411099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.411127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.411497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.411528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 
00:32:54.807 [2024-11-29 13:16:57.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.411891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.412255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.412286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.412652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.807 [2024-11-29 13:16:57.412681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.807 qpair failed and we were unable to recover it. 00:32:54.807 [2024-11-29 13:16:57.413059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.413088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.413453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.413484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.413849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.413879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.414143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.414182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.414537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.414567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.414923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.414953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.415324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.415354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.415694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.415723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.416028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.416058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.416410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.416441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.416799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.416828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.417178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.417209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.417577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.417606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.417974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.418004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.418333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.418362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.418725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.418754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.419110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.419139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.419442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.419471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.419830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.419860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.420218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.420249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.420597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.420625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.420987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.421015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.421370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.421400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.421772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.421801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.422173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.422203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.422551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.422591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.422954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.422983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 
00:32:54.808 [2024-11-29 13:16:57.423342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.423373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.423727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.808 [2024-11-29 13:16:57.423755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.808 qpair failed and we were unable to recover it. 00:32:54.808 [2024-11-29 13:16:57.424147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.424207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.424587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.424617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.424979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.425007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.425376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.425407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.425772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.425801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.426053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.426081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.426322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.426353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.426734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.426763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.427126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.427155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.427514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.427543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.427888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.427917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.428286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.428316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.428550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.428582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.428985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.429014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.429388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.429420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.429863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.429892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.430249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.430279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.430641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.430670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.431030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.431059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.431430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.431461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.431864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.432222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.432252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.432506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.432535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.432893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.432923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.433182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.433214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.433574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.433603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.434037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.434066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.434414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.434446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 
00:32:54.809 [2024-11-29 13:16:57.434806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.434835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.435171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.435201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.809 [2024-11-29 13:16:57.435544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.809 [2024-11-29 13:16:57.435573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.809 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.435932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.435960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.436302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.436333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.436692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.436721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.437075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.437106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.437490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.437521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.437785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.437821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.438178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.438209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.438545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.438574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.438939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.438969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.439334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.439365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.439729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.439758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.440099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.440128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.440559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.440589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.440945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.440975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.441333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.441363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.441739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.441768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.442135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.442174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.442516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.442546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.442893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.442921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.443218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.443249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.443647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.443676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.443910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.443942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.444283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.444314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.444659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.444689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.445056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.445086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.445453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.445484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.445725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.445756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 
00:32:54.810 [2024-11-29 13:16:57.446137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.446177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.446577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.446607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.446954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.446983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.810 [2024-11-29 13:16:57.447330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-11-29 13:16:57.447361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.810 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.447722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.447751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 
00:32:54.811 [2024-11-29 13:16:57.448121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.448152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.448523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.448553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.448788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.448820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.449180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.449211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.449594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.449624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 
00:32:54.811 [2024-11-29 13:16:57.449992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.450020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:54.811 [2024-11-29 13:16:57.450410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-11-29 13:16:57.450447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:54.811 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.450804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.450837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.451195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.451226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.451590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.451619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 
00:32:55.087 [2024-11-29 13:16:57.452013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.452042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.452410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.452440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.452791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.452820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.453174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.453212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.453477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.453508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 
00:32:55.087 [2024-11-29 13:16:57.453883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.453913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.454283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.454314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.454682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.454711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.455054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.455083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.455452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.455483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 
00:32:55.087 [2024-11-29 13:16:57.455839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.455867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.456243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.456274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.456616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.456646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.457000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.457029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.087 [2024-11-29 13:16:57.457375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.457405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 
00:32:55.087 [2024-11-29 13:16:57.457764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.087 [2024-11-29 13:16:57.457794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.087 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.458169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.458199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.458546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.458577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.458827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.458856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.459246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.459622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.459651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.459956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.459985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.460349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.460379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.460647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.460676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.460930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.460962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.461316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.461346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.461752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.461782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.462024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.462055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.462425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.462456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.462830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.462859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.463236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.463266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.463641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.463671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.464030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.464060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.464443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.464473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.464718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.464746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.465099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.465128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.465414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.465444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.465787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.465817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.466175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.466206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.466565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.466594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.466885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.466915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.467283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.467313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.467549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.467580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.467973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.468009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 00:32:55.088 [2024-11-29 13:16:57.468364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.088 [2024-11-29 13:16:57.468395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.088 qpair failed and we were unable to recover it. 
00:32:55.088 [2024-11-29 13:16:57.468574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.468605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.469014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.469044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.469382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.469413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.469784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.469813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.470178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.470209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 
00:32:55.089 [2024-11-29 13:16:57.470574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.470603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.470960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.470988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.471334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.471367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.471793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.471822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.472184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.472215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 
00:32:55.089 [2024-11-29 13:16:57.472573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.472602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.472954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.472983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.473322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.473354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.473746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.473775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.474133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.474174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 
00:32:55.089 [2024-11-29 13:16:57.474547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.474575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.474827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.474858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.475237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.475269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.475637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.475666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.476014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.476043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 
00:32:55.089 [2024-11-29 13:16:57.476343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.476378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.476738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.476769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.477280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.477391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.477843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.477880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.478272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.478307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 
00:32:55.089 [2024-11-29 13:16:57.478573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.478610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.478960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.478990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.479332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.479362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.479746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.089 [2024-11-29 13:16:57.479775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.089 qpair failed and we were unable to recover it. 00:32:55.089 [2024-11-29 13:16:57.480132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.480185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.090 [2024-11-29 13:16:57.480546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.480575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.480927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.480955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.481319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.481350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.481727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.481756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.482123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.482152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.090 [2024-11-29 13:16:57.482431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.482460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.482831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.482859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.483175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.483206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.483569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.483605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.483868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.483897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.090 [2024-11-29 13:16:57.484346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.484376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.484719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.484749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.485126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.485156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.485535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.485921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.485952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.090 [2024-11-29 13:16:57.486336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.486366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.486736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.486765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.487130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.487170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.487546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.487575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.487950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.487978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.090 [2024-11-29 13:16:57.488328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.488359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.488708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.488737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.489102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.489130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.489505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.489534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 00:32:55.090 [2024-11-29 13:16:57.489808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.090 [2024-11-29 13:16:57.489836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.090 qpair failed and we were unable to recover it. 
00:32:55.094 [... identical connect() failed (errno = 111, ECONNREFUSED) / qpair-failed sequence for tqpair=0x7f00d8000b90 (addr=10.0.0.2, port=4420) repeats with new timestamps through 2024-11-29 13:16:57.530599; duplicate log entries omitted ...]
00:32:55.094 [2024-11-29 13:16:57.531000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.094 [2024-11-29 13:16:57.531029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.094 qpair failed and we were unable to recover it. 00:32:55.094 [2024-11-29 13:16:57.531409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.094 [2024-11-29 13:16:57.531440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.094 qpair failed and we were unable to recover it. 00:32:55.094 [2024-11-29 13:16:57.531809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.094 [2024-11-29 13:16:57.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.094 qpair failed and we were unable to recover it. 00:32:55.094 [2024-11-29 13:16:57.532292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.094 [2024-11-29 13:16:57.532322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.094 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.532462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.532490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.532736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.532765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.533128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.533168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.533516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.533546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.533922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.533952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.534320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.534350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.534707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.534736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.535108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.535137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.535513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.535544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.535886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.535916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.536280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.536311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.536618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.536646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.536766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.536795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.537180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.537210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.537578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.537607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.538007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.538037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.538398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.538433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.538811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.538840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.539208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.539238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.539484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.539514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.539744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.539776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.540133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.540176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.540569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.540599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.540958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.540987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.541336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.541367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.541712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.541742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 
00:32:55.095 [2024-11-29 13:16:57.542118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.542147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.542531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.542561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.542902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.095 [2024-11-29 13:16:57.542932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.095 qpair failed and we were unable to recover it. 00:32:55.095 [2024-11-29 13:16:57.543357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.543388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.543772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.543801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.544215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.544244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.544603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.544632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.545002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.545031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.545399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.545431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.545838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.545867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.546236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.546268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.546657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.546686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.547054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.547084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.547423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.547453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.547706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.547737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.548130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.548170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.548520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.548550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.548979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.549009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.549386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.549418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.549777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.549806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.550198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.550229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.550597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.550628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.550996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.551026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.551394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.551423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.551794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.551824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.552200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.552232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.552584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.552613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.552973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.553003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.553291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.553322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.553702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.553731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 
00:32:55.096 [2024-11-29 13:16:57.554004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.554039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.554382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.554413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.554789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.554818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.096 [2024-11-29 13:16:57.555177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.096 [2024-11-29 13:16:57.555208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.096 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.555452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.555483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 
00:32:55.097 [2024-11-29 13:16:57.555859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.555888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.556238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.556269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.556708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.556737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.557140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.557181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.557520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.557550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 
00:32:55.097 [2024-11-29 13:16:57.557913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.557942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.558283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.558315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.558710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.558738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.559107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.559137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.559525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.559555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 
00:32:55.097 [2024-11-29 13:16:57.559692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.559722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.560099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.560129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.560462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.560493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.560860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.560889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 00:32:55.097 [2024-11-29 13:16:57.561235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.097 [2024-11-29 13:16:57.561265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.097 qpair failed and we were unable to recover it. 
00:32:55.101 [2024-11-29 13:16:57.603822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.603850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.604192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.604222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.604628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.604658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.605011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.605040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.605415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.605445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 
00:32:55.101 [2024-11-29 13:16:57.605823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.605851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.606213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.606243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.606644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.607008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.607037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.607421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.607451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 
00:32:55.101 [2024-11-29 13:16:57.607811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.607840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.608208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.608238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.608589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.608620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.608867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.608896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.101 qpair failed and we were unable to recover it. 00:32:55.101 [2024-11-29 13:16:57.609236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.101 [2024-11-29 13:16:57.609266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.609625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.609654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.610018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.610047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.610406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.610437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.610797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.610827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.611190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.611221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.611593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.611622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.611994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.612023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.612388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.612419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.612781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.612810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.613178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.613209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.613573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.613603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.613939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.613968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.614400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.614431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.614789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.614819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.615179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.615211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.615605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.615652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.615991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.616022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.616300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.616330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.616676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.616706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.617143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.617183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.617563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.617592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.617960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.617990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.618358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.618389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.618752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.618781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.619142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.619183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 
00:32:55.102 [2024-11-29 13:16:57.619557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.619585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.619957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.619985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.620227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.620260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.620618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.102 [2024-11-29 13:16:57.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.102 qpair failed and we were unable to recover it. 00:32:55.102 [2024-11-29 13:16:57.621011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.621041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.621297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.621326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.621684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.621713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.622077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.622107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.622536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.622567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.622918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.622948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.623236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.623265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.623629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.623659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.624029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.624059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.624334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.624364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.624722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.624751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.625149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.625206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.625597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.625626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.625974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.626005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.626416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.626446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.626806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.626835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.627199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.627240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.627561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.627590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.627932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.627961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.628350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.628381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.628738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.628766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.629131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.629172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.629554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.629584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.629946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.629975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.630339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.630369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.630717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.630747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 
00:32:55.103 [2024-11-29 13:16:57.631110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.631145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.631522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.631554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.631915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.103 [2024-11-29 13:16:57.631945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.103 qpair failed and we were unable to recover it. 00:32:55.103 [2024-11-29 13:16:57.632327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.104 [2024-11-29 13:16:57.632357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.104 qpair failed and we were unable to recover it. 00:32:55.104 [2024-11-29 13:16:57.632723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.104 [2024-11-29 13:16:57.632752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.104 qpair failed and we were unable to recover it. 
00:32:55.104 [2024-11-29 13:16:57.633117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.104 [2024-11-29 13:16:57.633147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.104 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages for tqpair=0x7f00d8000b90 (10.0.0.2:4420) repeated through 2024-11-29 13:16:57.676776 ...]
00:32:55.108 [2024-11-29 13:16:57.677145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.677193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.677555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.677583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.677839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.677868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.678247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.678279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.678634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.678665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 
00:32:55.108 [2024-11-29 13:16:57.679026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.679055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.679315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.679345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.679706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.679736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.680108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.680137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.680507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.680537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 
00:32:55.108 [2024-11-29 13:16:57.680901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.680931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.681298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.681328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.681702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.681731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.682079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.682110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.682513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.682544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 
00:32:55.108 [2024-11-29 13:16:57.682852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.682881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.683256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.683286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.683687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.683716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.684056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.684086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.684458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.684488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 
00:32:55.108 [2024-11-29 13:16:57.684720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.684752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.685006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.685036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.685402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.685431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.685793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.685822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.686178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.686208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 
00:32:55.108 [2024-11-29 13:16:57.686564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.686593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.687010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.687038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.687366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.687398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.108 qpair failed and we were unable to recover it. 00:32:55.108 [2024-11-29 13:16:57.687755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.108 [2024-11-29 13:16:57.687783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.688157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.688197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.688554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.688583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.688945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.688973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.689341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.689375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.689761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.689791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.690147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.690193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.690557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.690594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.690928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.690957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.691320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.691351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.691720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.691749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.692123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.692151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.692499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.692528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.692976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.693005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.693360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.693389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.693762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.693791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.694152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.694203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.694562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.694591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.695025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.695054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.695440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.695470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.695815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.695845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.696207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.696237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.696618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.696647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.697012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.697041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.697395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.697426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.697796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.697824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.698194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.698226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 
00:32:55.109 [2024-11-29 13:16:57.698597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.698628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.698999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.699031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.109 [2024-11-29 13:16:57.699384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.109 [2024-11-29 13:16:57.699415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.109 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.699779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.699808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.700176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.700206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 
00:32:55.110 [2024-11-29 13:16:57.700641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.700670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.700993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.701022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.701400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.701430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.701808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.701837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.702204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.702235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 
00:32:55.110 [2024-11-29 13:16:57.702602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.702632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.702995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.703025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.703260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.703300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.703702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.703731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.703990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.704018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 
00:32:55.110 [2024-11-29 13:16:57.704380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.704410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.704779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.704807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.705178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.705207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.705559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.705588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.705796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.705827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 
00:32:55.110 [2024-11-29 13:16:57.706196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.706226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.706571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.706601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.706949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.706978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.707231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.707265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 00:32:55.110 [2024-11-29 13:16:57.707628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.110 [2024-11-29 13:16:57.707658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.110 qpair failed and we were unable to recover it. 
00:32:55.110 [... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with timestamps through 2024-11-29 13:16:57.749794 ...]
00:32:55.114 [2024-11-29 13:16:57.750157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.114 [2024-11-29 13:16:57.750199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.114 qpair failed and we were unable to recover it. 00:32:55.114 [2024-11-29 13:16:57.750569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.114 [2024-11-29 13:16:57.750599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.114 qpair failed and we were unable to recover it. 00:32:55.114 [2024-11-29 13:16:57.751001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.114 [2024-11-29 13:16:57.751029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.114 qpair failed and we were unable to recover it. 00:32:55.114 [2024-11-29 13:16:57.751426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.114 [2024-11-29 13:16:57.751457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.114 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.751801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.751831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 
00:32:55.389 [2024-11-29 13:16:57.752234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.752265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.752514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.752546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.752909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.752938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.753224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.753254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.753596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.753624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 
00:32:55.389 [2024-11-29 13:16:57.753974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.754004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.389 [2024-11-29 13:16:57.754372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.389 [2024-11-29 13:16:57.754403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.389 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.754772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.754801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.755178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.755209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.755546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.755575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.755920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.755950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.756313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.756345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.756681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.756710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.757116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.757144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.757495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.757525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.757892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.757922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.758289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.758318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.758579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.758612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.758888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.758918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.759293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.759324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.759678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.759708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.760100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.760495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.760526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.760884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.760914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.761181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.761211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.761596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.761625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.761996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.762024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.762410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.762442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.762815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.762845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.763217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.763248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.763620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.763649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.764002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.764030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.764406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.764442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.764811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.764841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.765201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.765231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 
00:32:55.390 [2024-11-29 13:16:57.765610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.390 [2024-11-29 13:16:57.765640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.390 qpair failed and we were unable to recover it. 00:32:55.390 [2024-11-29 13:16:57.766020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.766049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.766362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.766392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.766777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.766806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.767256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.767286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.391 [2024-11-29 13:16:57.767636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.767666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.768037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.768075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.768421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.768452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.768674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.768703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.768860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.768888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.391 [2024-11-29 13:16:57.769252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.769282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.769626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.769656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.770027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.770055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.770475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.770506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.770921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.770951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.391 [2024-11-29 13:16:57.771317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.771346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.771674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.771703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.772072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.772101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.772475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.772506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.772753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.772786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.391 [2024-11-29 13:16:57.773141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.773186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.773532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.773561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.773876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.773906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.774276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.774307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.774574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.774603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.391 [2024-11-29 13:16:57.774954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.774982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.775326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.775357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.775705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.775734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.775981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.776009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 00:32:55.391 [2024-11-29 13:16:57.776384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.391 [2024-11-29 13:16:57.776414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.391 qpair failed and we were unable to recover it. 
00:32:55.392 [2024-11-29 13:16:57.776622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.776654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.777025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.777330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.777361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.777713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.777743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.777995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.778023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 
00:32:55.392 [2024-11-29 13:16:57.778365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.778396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.778765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.778794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.779172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.779209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.779458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.779488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 00:32:55.392 [2024-11-29 13:16:57.779858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.392 [2024-11-29 13:16:57.779888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.392 qpair failed and we were unable to recover it. 
00:32:55.396 [2024-11-29 13:16:57.821271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.821302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.821682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.821710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.822024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.822054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.822305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.822334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.822702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.822730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 
00:32:55.396 [2024-11-29 13:16:57.822988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.823020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.823386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.823417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.823756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.823793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.824178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.824209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.824615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.824643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 
00:32:55.396 [2024-11-29 13:16:57.825075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.825104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.825481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.825512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.825888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.825917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.826292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.826322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.826690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.826719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 
00:32:55.396 [2024-11-29 13:16:57.827082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.827110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.827502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.827533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.827902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.827931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.828204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.828234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.828614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.828642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 
00:32:55.396 [2024-11-29 13:16:57.829003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.829032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.829404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.829436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.829784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.829813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.830179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.396 [2024-11-29 13:16:57.830209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.396 qpair failed and we were unable to recover it. 00:32:55.396 [2024-11-29 13:16:57.830625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.830654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.831015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.831043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.831414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.831445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.831804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.831833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.832188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.832220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.832469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.832497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.832851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.832879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.833261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.833291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.833644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.833674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.834029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.834059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.834306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.834337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.834682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.834711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.835073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.835103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.835476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.835507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.835875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.835903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.836169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.836203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.836448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.836480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.836865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.836893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.837250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.837280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.837537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.837566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.837940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.837969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.838323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.838354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.838733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.838762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.839129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.839187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.839552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.839584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.839831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.839860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 
00:32:55.397 [2024-11-29 13:16:57.840156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.840199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.840522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.840551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.840914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.840944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.841207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.397 [2024-11-29 13:16:57.841242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.397 qpair failed and we were unable to recover it. 00:32:55.397 [2024-11-29 13:16:57.841629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.841657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 
00:32:55.398 [2024-11-29 13:16:57.841924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.841953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.842340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.842370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.842698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.842728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.843091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.843119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.843553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.843584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 
00:32:55.398 [2024-11-29 13:16:57.843940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.843968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.844326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.844364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.844741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.844770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.845133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.845172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.845523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.845552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 
00:32:55.398 [2024-11-29 13:16:57.845911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.845941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.846201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.846231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.846488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.846517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.846890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.846919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.847287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.847317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 
00:32:55.398 [2024-11-29 13:16:57.847669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.847698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.848042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.848072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.848456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.848486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.848742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.848770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 00:32:55.398 [2024-11-29 13:16:57.849154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.398 [2024-11-29 13:16:57.849196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.398 qpair failed and we were unable to recover it. 
00:32:55.398 [2024-11-29 13:16:57.849591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.398 [2024-11-29 13:16:57.849620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.398 qpair failed and we were unable to recover it.
00:32:55.402 [... the same connect() failed (errno = 111) / sock connection error (tqpair=0x7f00d8000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeats for each reconnect attempt through 13:16:57.892912 ...]
00:32:55.402 [2024-11-29 13:16:57.893281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.402 [2024-11-29 13:16:57.893311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.402 qpair failed and we were unable to recover it. 00:32:55.402 [2024-11-29 13:16:57.893668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.402 [2024-11-29 13:16:57.893697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.402 qpair failed and we were unable to recover it. 00:32:55.402 [2024-11-29 13:16:57.893936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.402 [2024-11-29 13:16:57.893968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.402 qpair failed and we were unable to recover it. 00:32:55.402 [2024-11-29 13:16:57.894317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.402 [2024-11-29 13:16:57.894348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.402 qpair failed and we were unable to recover it. 00:32:55.402 [2024-11-29 13:16:57.894710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.402 [2024-11-29 13:16:57.894746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.402 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.895047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.895077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.895416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.895448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.895800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.895830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.896180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.896209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.896570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.896599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.896957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.896986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.897353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.897385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.897744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.897773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.898137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.898175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.898582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.898611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.898980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.899011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.899391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.899421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.899779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.899808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.900177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.900208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.900380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.900410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.900749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.900778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.901154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.901207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.901582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.901611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.901864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.901893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.902257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.902288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.902653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.902682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.902943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.902972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.903351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.903380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.903746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.903776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.904152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.904193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 
00:32:55.403 [2024-11-29 13:16:57.904553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.904582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.904955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.904984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.905326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.905355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.403 [2024-11-29 13:16:57.905710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.403 [2024-11-29 13:16:57.905739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.403 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.906099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.906128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.906502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.906532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.906780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.906809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.907145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.907187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.907522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.907550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.907917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.907946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.908311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.908341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.908785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.908814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.909179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.909208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.909551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.909581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.909892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.909928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.910292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.910323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.910744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.910775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.911118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.911148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.911533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.911563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.911940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.911969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.912332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.912362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.912736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.912766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.913122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.913152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.913541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.913572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.913912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.913941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.914303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.914333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.914597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.914627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.915002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.915030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.915386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.915420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.915792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.915821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 
00:32:55.404 [2024-11-29 13:16:57.916183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.916215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.916475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.916504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.404 [2024-11-29 13:16:57.916897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.404 [2024-11-29 13:16:57.916926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.404 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.917302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.917332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.917680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.917709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 
00:32:55.405 [2024-11-29 13:16:57.918065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.918094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.918453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.918484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.918850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.918878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.919221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.919250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.919638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.919667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 
00:32:55.405 [2024-11-29 13:16:57.920016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.920047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.920293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.920326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.920696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.920725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.921080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.921109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 00:32:55.405 [2024-11-29 13:16:57.921520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.405 [2024-11-29 13:16:57.921550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.405 qpair failed and we were unable to recover it. 
00:32:55.405 [2024-11-29 13:16:57.921909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.405 [2024-11-29 13:16:57.921938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.405 qpair failed and we were unable to recover it.
00:32:55.405 [... the same pair of errors — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, followed by the sock connection error for tqpair=0x7f00d8000b90 (addr=10.0.0.2, port=4420) in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock and "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 13:16:57.922346 through 13:16:57.967759 ...]
00:32:55.409 [2024-11-29 13:16:57.968123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.968152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.968598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.968629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.968981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.969011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.969355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.969387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.969727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.969758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 
00:32:55.409 [2024-11-29 13:16:57.970098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.970127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.970495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.970525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.970889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.970918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.971279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.971310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 00:32:55.409 [2024-11-29 13:16:57.971556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.409 [2024-11-29 13:16:57.971589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.409 qpair failed and we were unable to recover it. 
00:32:55.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1115709 Killed "${NVMF_APP[@]}" "$@"
00:32:55.409 [2024-11-29 13:16:57.971984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.409 [2024-11-29 13:16:57.972014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.409 qpair failed and we were unable to recover it.
00:32:55.409 [2024-11-29 13:16:57.972426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.409 [2024-11-29 13:16:57.972457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.409 qpair failed and we were unable to recover it.
00:32:55.409 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-11-29 13:16:57.972825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.409 [2024-11-29 13:16:57.972855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.409 qpair failed and we were unable to recover it.
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-11-29 13:16:57.973228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.973260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
[2024-11-29 13:16:57.973638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
[2024-11-29 13:16:57.973668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-29 13:16:57.974045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.974075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.974435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.974466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.974709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.974989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.975019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.975466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.975496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.975748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.975777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.976130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.976178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.976532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.976562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.976918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.976948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.977112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.977141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.977513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.977544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.977906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.977935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.978322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.978352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.978707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.978737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.979108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.979139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.979451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.979481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.979847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.979877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.980258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.980290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.980671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.980701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.981056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.981086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.981460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.981492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.981793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.981824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 [2024-11-29 13:16:57.982060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.982094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1116719
[2024-11-29 13:16:57.982431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.410 [2024-11-29 13:16:57.982464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.410 qpair failed and we were unable to recover it.
00:32:55.410 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1116719
13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-11-29 13:16:57.982880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.982915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1116719 ']'
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-11-29 13:16:57.983284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.983315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:55.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-11-29 13:16:57.983724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.983754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
13:16:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-29 13:16:57.984152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.984208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.984449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.984479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.984871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.984903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.985243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.985276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.985685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.985716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.986060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.986092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.986395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.986428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.986798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.986832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.987080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.987114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.987534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.987566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.987957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.987987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.988309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.988341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.988709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.988739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.989096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.989125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.989425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.989456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.989789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.989818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.990196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.990228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.990563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.990593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.990942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.990983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.991344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.991376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.991625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.991657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.992079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.992110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.992570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.411 [2024-11-29 13:16:57.992601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.411 qpair failed and we were unable to recover it.
00:32:55.411 [2024-11-29 13:16:57.992969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.992998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.993353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.993384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.993756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.993785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.994143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.994187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.994566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.994595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.994963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.994993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.995358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.995389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.995640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.995670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.996014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.996045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.996433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.996465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.996822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.996852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.997140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.997184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.997605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.997638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.997892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.997922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.998213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.998244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.998609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.998638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.999084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.999116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.999503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:57.999917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:57.999946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.000251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:58.000284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.000680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:58.000710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.001073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:58.001102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.001570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:58.001601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.002017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.412 [2024-11-29 13:16:58.002047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.412 qpair failed and we were unable to recover it.
00:32:55.412 [2024-11-29 13:16:58.002395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.002428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 00:32:55.412 [2024-11-29 13:16:58.002793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.002823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 00:32:55.412 [2024-11-29 13:16:58.003187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.003217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 00:32:55.412 [2024-11-29 13:16:58.003681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.003711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 00:32:55.412 [2024-11-29 13:16:58.003937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.003968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 
00:32:55.412 [2024-11-29 13:16:58.004348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.412 [2024-11-29 13:16:58.004379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.412 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.004788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.004818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.005112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.005147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.005643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.005678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.006067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.006104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 
00:32:55.413 [2024-11-29 13:16:58.006423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.006454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.006712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.006748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.007127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.007169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.007528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.007559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.007911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.007942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 
00:32:55.413 [2024-11-29 13:16:58.008303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.008336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.008733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.008764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.009119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.009149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.009409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.009441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.009828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.009858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 
00:32:55.413 [2024-11-29 13:16:58.010235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.010270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.010544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.010575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.010923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.010953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.011334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.011366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.011755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.011784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 
00:32:55.413 [2024-11-29 13:16:58.012192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.012224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.012585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.012615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.013002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.013032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.013248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.013282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 00:32:55.413 [2024-11-29 13:16:58.013682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.013715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.413 qpair failed and we were unable to recover it. 
00:32:55.413 [2024-11-29 13:16:58.014088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.413 [2024-11-29 13:16:58.014117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.014576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.014607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.014971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.015001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.015369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.015401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.015655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.015684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.016048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.016077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.016495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.016526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.016883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.016912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.017280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.017312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.017771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.017801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.018193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.018224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.018593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.018622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.018995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.019025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.019392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.019422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.019672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.019704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.020088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.020119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.020536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.020567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.020937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.020966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.021206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.021239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.021632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.021663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.022019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.022049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.022411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.022455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.022711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.022741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.023112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.023142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.023565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.023597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.023973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.024003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.024315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.024346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.024742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.024773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.025232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.025263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.025636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.025666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 
00:32:55.414 [2024-11-29 13:16:58.025933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.414 [2024-11-29 13:16:58.025963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.414 qpair failed and we were unable to recover it. 00:32:55.414 [2024-11-29 13:16:58.026350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.026381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.026655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.026685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.027066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.027095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.027279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.027310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.027690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.027720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.028113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.028143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.028419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.028450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.028798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.028828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.029205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.029236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.029632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.029661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.030029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.030059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.030462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.030492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.030608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.030638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.030977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.031007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.031402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.031434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.031676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.031706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.032063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.032093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.032533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.032566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.032916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.032947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.033328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.033360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.033639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.033667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.034030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.034059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.034423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.034454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.034847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.034877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.035285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.035315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.035434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.035462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.035810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.035840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.036118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.036147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.415 [2024-11-29 13:16:58.036540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.036572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 
00:32:55.415 [2024-11-29 13:16:58.036932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.415 [2024-11-29 13:16:58.036961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.415 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.037345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.037382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.037762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.037790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.038184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.038215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.038583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.038612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.416 [2024-11-29 13:16:58.038988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.039016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.039389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.039421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.039794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.040259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.040289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.040660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.040690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.416 [2024-11-29 13:16:58.040879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.040909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.041258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.041290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.041690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.041721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.041933] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:32:55.416 [2024-11-29 13:16:58.042011] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.416 [2024-11-29 13:16:58.042116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.042157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.416 [2024-11-29 13:16:58.042422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.042452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.042691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.042720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.043190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.043223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.043521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.043552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.043930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.043960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.416 [2024-11-29 13:16:58.044347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.044380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.044553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.044584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.044949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.044980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.045330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.045364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.045611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.045646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.416 [2024-11-29 13:16:58.046009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.046040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.046400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.046434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.046797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.046829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.047103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.047135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 00:32:55.416 [2024-11-29 13:16:58.047527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.416 [2024-11-29 13:16:58.047559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.416 qpair failed and we were unable to recover it. 
00:32:55.417 [2024-11-29 13:16:58.047919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.047950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.048310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.048343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.048717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.048748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.049124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.049155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.049541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.049573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 
00:32:55.417 [2024-11-29 13:16:58.049826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.049857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.050194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.050227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.050605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.050636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.051009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.051040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.051399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.051430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 
00:32:55.417 [2024-11-29 13:16:58.051793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.051824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.052216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.052256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.052672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.052703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.053083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.053115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.053379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.053413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 
00:32:55.417 [2024-11-29 13:16:58.053782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.053813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.054071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.054106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.054477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.054509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.054895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.054926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.055044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.055076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 
00:32:55.417 [2024-11-29 13:16:58.055247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.055279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.417 [2024-11-29 13:16:58.055543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.417 [2024-11-29 13:16:58.055574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.417 qpair failed and we were unable to recover it. 00:32:55.691 [2024-11-29 13:16:58.055958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.691 [2024-11-29 13:16:58.055991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.691 qpair failed and we were unable to recover it. 00:32:55.691 [2024-11-29 13:16:58.056308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.691 [2024-11-29 13:16:58.056341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.691 qpair failed and we were unable to recover it. 00:32:55.691 [2024-11-29 13:16:58.056711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.056742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.057115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.057147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.057390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.057421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.057768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.057798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.058055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.058089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.058438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.058470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.058851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.058883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.059240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.059271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.059638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.059670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.060040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.060070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.060435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.060466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.060833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.060863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.061218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.061249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.061517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.061546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.061908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.061939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.062302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.062333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.062707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.062738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.063108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.063137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.063531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.063561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.063940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.063969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.064361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.064393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.064777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.064815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.065185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.065216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.065574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.065605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.065971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.066001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.066265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.066296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 
00:32:55.692 [2024-11-29 13:16:58.066684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.066714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.067082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.692 [2024-11-29 13:16:58.067119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.692 qpair failed and we were unable to recover it. 00:32:55.692 [2024-11-29 13:16:58.067511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.693 [2024-11-29 13:16:58.067549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.693 qpair failed and we were unable to recover it. 00:32:55.693 [2024-11-29 13:16:58.067890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.693 [2024-11-29 13:16:58.067919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.693 qpair failed and we were unable to recover it. 00:32:55.693 [2024-11-29 13:16:58.068282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.693 [2024-11-29 13:16:58.068313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.693 qpair failed and we were unable to recover it. 
00:32:55.696 [2024-11-29 13:16:58.110759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.696 [2024-11-29 13:16:58.110788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.696 qpair failed and we were unable to recover it. 00:32:55.696 [2024-11-29 13:16:58.111036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.696 [2024-11-29 13:16:58.111065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.696 qpair failed and we were unable to recover it. 00:32:55.696 [2024-11-29 13:16:58.111438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.696 [2024-11-29 13:16:58.111470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.696 qpair failed and we were unable to recover it. 00:32:55.696 [2024-11-29 13:16:58.111826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.696 [2024-11-29 13:16:58.111857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.696 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.112232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.112262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 
00:32:55.697 [2024-11-29 13:16:58.112553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.112582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.112956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.112992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.113361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.113392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.113773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.113802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.114071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.114100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 
00:32:55.697 [2024-11-29 13:16:58.114448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.114479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.114855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.114884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.115245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.115276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.115638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.115667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.116041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.116070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 
00:32:55.697 [2024-11-29 13:16:58.116462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.116492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.116871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.116901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.117120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.117150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.117427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.117456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.117836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.117865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 
00:32:55.697 [2024-11-29 13:16:58.118121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.118151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.118531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.118561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.118926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.118956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.119222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.119253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.119495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.119524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 
00:32:55.697 [2024-11-29 13:16:58.119759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.119788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.697 qpair failed and we were unable to recover it. 00:32:55.697 [2024-11-29 13:16:58.120025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.697 [2024-11-29 13:16:58.120054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.120419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.120450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.120907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.120937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.121306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.121336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.121703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.121731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.121986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.122018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.122385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.122416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.122789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.122820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.123195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.123227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.123439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.123468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.123741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.123770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.124152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.124193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.124547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.124576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.124956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.124985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.125356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.125387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.125756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.125785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.126151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.126191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.126560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.126588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.126938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.126968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.127237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.127271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.127686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.127724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.128102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.128510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.128542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.128893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.128923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.129296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.129328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.129710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.129739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.130096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.130127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.130517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.130547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.130918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.130947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 
00:32:55.698 [2024-11-29 13:16:58.131318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.131349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.698 [2024-11-29 13:16:58.131742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.698 [2024-11-29 13:16:58.131771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.698 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.132140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.132179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.132438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.132467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.132920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.132949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 
00:32:55.699 [2024-11-29 13:16:58.133299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.133330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.133708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.133738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.134151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.134192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.134558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.134587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.134960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.134990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 
00:32:55.699 [2024-11-29 13:16:58.135285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.135320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.135712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.135742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.136098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.136126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.136537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.136567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.136947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.136976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 
00:32:55.699 [2024-11-29 13:16:58.137246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.137277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.137648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.137678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.138053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.138082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.138519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.138553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 00:32:55.699 [2024-11-29 13:16:58.138915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.699 [2024-11-29 13:16:58.138945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.699 qpair failed and we were unable to recover it. 
00:32:55.699 [2024-11-29 13:16:58.139308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.699 [2024-11-29 13:16:58.139339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.699 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 13:16:58.139308 through 13:16:58.182788, with one unrelated line interleaved: ...]
00:32:55.700 [2024-11-29 13:16:58.146401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:55.703 [2024-11-29 13:16:58.183149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.183191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.183548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.183578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.183945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.183975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.184337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.184375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.184743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.184772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 
00:32:55.703 [2024-11-29 13:16:58.185143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.185190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.185537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.185567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.703 [2024-11-29 13:16:58.185939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.703 [2024-11-29 13:16:58.185968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.703 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.186335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.186365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.186730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.186759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.187103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.187134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.187490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.187520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.187879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.187910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.188245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.188278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.188650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.188680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.189048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.189078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.189497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.189530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.189874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.189904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.190265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.190296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.190661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.190690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.191046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.191075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.191474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.191504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.191753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.191781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.192173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.192204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.192576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.192605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.192958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.192987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.193468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.193499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.193827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.193857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.194214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.194244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.194608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.194644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.195009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.195038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.195412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.195444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.195824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.195853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.196157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.196195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.196571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.196601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 
00:32:55.704 [2024-11-29 13:16:58.196969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.196998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.197235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.197266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.197655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.197685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.704 qpair failed and we were unable to recover it. 00:32:55.704 [2024-11-29 13:16:58.198049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.704 [2024-11-29 13:16:58.198080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.198392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.198422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 [2024-11-29 13:16:58.198416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.198465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.705 [2024-11-29 13:16:58.198474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.705 [2024-11-29 13:16:58.198481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.705 [2024-11-29 13:16:58.198487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.705 [2024-11-29 13:16:58.198795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.198825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.199197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.199228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.199586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.199615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.199880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.199909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.200289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.200320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.200522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:55.705 [2024-11-29 13:16:58.200690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.200719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.200691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:55.705 [2024-11-29 13:16:58.200849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:55.705 [2024-11-29 13:16:58.200849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:55.705 [2024-11-29 13:16:58.201055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.201084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.201426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.201458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.201820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.201849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.202177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.202207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.202592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.202621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.202855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.202883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.203257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.203288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.203629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.203658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.204020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.204049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.204407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.204437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.204817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.204846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.205088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.205117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.205402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.205434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.205683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.205712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.206056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.206086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.206471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.206503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.705 [2024-11-29 13:16:58.206750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.206779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 
00:32:55.705 [2024-11-29 13:16:58.207034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.705 [2024-11-29 13:16:58.207064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.705 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.207292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.207326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.207706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.207737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.208099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.208136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.208497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.208529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 
00:32:55.706 [2024-11-29 13:16:58.208874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.208905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.209268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.209300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.209667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.209696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.210097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.210442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.210474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 
00:32:55.706 [2024-11-29 13:16:58.210836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.210865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.211228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.211259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.211545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.211574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.211932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.211961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.212219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.212249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 
00:32:55.706 [2024-11-29 13:16:58.212644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.212672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.213049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.213078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.213335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.213368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.213754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.213783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 00:32:55.706 [2024-11-29 13:16:58.214144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.706 [2024-11-29 13:16:58.214184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:55.706 qpair failed and we were unable to recover it. 
00:32:55.706 [2024-11-29 13:16:58.214534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.214564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.214930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.214960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.215318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.215350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.215751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.215781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.216146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.216185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.216537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.216567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.216927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.216958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.217308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.217339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.217732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.218100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.218130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.218366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.706 qpair failed and we were unable to recover it.
00:32:55.706 [2024-11-29 13:16:58.218770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.706 [2024-11-29 13:16:58.218799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.219109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.219138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.219528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.219558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.219913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.219942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.220203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.220234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.220506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.220537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.220906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.220935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.221288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.221320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.221664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.221694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.222053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.222082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.222341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.222371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.222744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.222774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.223105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.223142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.223543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.223573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.223921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.223952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.224296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.224327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.224689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.224717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.225080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.225110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.225492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.225523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.225790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.225819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.226076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.226106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.226353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.226383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.226758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.226787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.227149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.227189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.227601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.227631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.228011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.228041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.228272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.228303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.228521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.228550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.228934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.228965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.707 qpair failed and we were unable to recover it.
00:32:55.707 [2024-11-29 13:16:58.229224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.707 [2024-11-29 13:16:58.229257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.229635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.229664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.229920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.229949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.230383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.230414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.230783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.230813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.231185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.231218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.231549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.231579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.231915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.231946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.232231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.232262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.232603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.232633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.233012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.233042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.233442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.233473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.233705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.233737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.233983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.234434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.234465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.234580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.234609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.235000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.235029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.235454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.235485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.235841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.235871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.236135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.236198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.236588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.236617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.236981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.237012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.237242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.237274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.237596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.237635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.237997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.238028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.238437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.238469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.238674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.238702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.239065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.239095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.239461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.239492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.239874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.239904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.708 qpair failed and we were unable to recover it.
00:32:55.708 [2024-11-29 13:16:58.240155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.708 [2024-11-29 13:16:58.240198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.240475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.240504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.240855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.240884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.241284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.241315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.241557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.241586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.241864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.241894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.242238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.242269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.242516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.242547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.242924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.242954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.243321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.243352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.243659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.243690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.244075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.244105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.244508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.244538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.244904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.245321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.245352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.245457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.245485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.245731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4e10 is same with the state(6) to be set
00:32:55.709 [2024-11-29 13:16:58.246338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.246447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.246900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.246938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.247439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.247476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.247817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.247849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.248242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.248277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.248542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.248570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.248939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.248968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.249282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.249316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.249654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.709 [2024-11-29 13:16:58.249684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:55.709 qpair failed and we were unable to recover it.
00:32:55.709 [2024-11-29 13:16:58.249949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.709 [2024-11-29 13:16:58.249979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.709 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.250335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.250368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.250716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.250747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.251014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.251051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.251293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.251325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.251689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.251718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.252044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.252073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.252324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.252356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.252672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.252709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.253044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.253081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.253294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.253327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.253701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.253733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.254093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.254122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.254318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.254348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.254709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.254737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.255091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.255120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.255534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.255564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.255939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.255968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.256341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.256372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.256581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.256611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.256877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.256906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.257247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.257279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.257559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.257597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.257937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.257967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.258232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.258262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.258692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.258722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.259115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.259145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.259511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.259542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.259910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.259939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.260294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.260325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 
00:32:55.710 [2024-11-29 13:16:58.260565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.260594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.710 [2024-11-29 13:16:58.260884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.710 [2024-11-29 13:16:58.260913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.710 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.261194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.261225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.261595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.261625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.261861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.261890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.262272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.262310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.262667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.262697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.262956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.262984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.263327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.263358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.263601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.263630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.263919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.263948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.264211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.264243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.264695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.264724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.265105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.265135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.265542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.265574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.265929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.265960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.266306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.266339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.266730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.266760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.267089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.267120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.267528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.267561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.267939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.267971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.268217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.268362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.268390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.268763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.268792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.269194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.269227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.269597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.269627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.269995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.270024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.270361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.270392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.270647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.270678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.271080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.271109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 
00:32:55.711 [2024-11-29 13:16:58.271476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.271510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.271747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.271778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.272142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.272185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.272557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.711 [2024-11-29 13:16:58.272586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.711 qpair failed and we were unable to recover it. 00:32:55.711 [2024-11-29 13:16:58.272810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.272841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
00:32:55.712 [2024-11-29 13:16:58.273111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.273142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.273527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.273557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.273764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.273792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.274207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.274239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.274577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.274607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
00:32:55.712 [2024-11-29 13:16:58.274972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.275001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.275365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.275396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.275766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.275797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.276028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.276057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.276408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.276440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
00:32:55.712 [2024-11-29 13:16:58.276658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.276687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.276909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.276944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.277339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.277369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.277750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.277780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.278146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.278187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
00:32:55.712 [2024-11-29 13:16:58.278545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.278574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.278939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.278969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.279330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.279360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.279737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.279766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 00:32:55.712 [2024-11-29 13:16:58.280135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.280172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
00:32:55.712 [2024-11-29 13:16:58.280495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.712 [2024-11-29 13:16:58.280524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.712 qpair failed and we were unable to recover it. 
[repeated messages omitted: the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." message group recurs continuously for tqpair=0xaef0c0 (addr=10.0.0.2, port=4420) from 13:16:58.280 through 13:16:58.322]
00:32:55.716 [2024-11-29 13:16:58.322553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.322582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.322800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.322829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.323190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.323219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.323575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.323605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.323942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.323970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 
00:32:55.716 [2024-11-29 13:16:58.324234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.324266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.324513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.324542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.324860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.324899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.716 qpair failed and we were unable to recover it. 00:32:55.716 [2024-11-29 13:16:58.325279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.716 [2024-11-29 13:16:58.325308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.325697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.325726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.325967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.325996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.326380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.326409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.326795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.326830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.327055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.327088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.327427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.327458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.327802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.327831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.328047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.328076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.328437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.328468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.328845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.328874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.329124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.329156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.329540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.329570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.329952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.329984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.330369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.330398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.330766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.330794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.331156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.331202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.331565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.331594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.331951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.331981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.332206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.332241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.332520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.332914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.332944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.333173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.333204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.333617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.333647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.333881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.333912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.334277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.334308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.334656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.334687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 
00:32:55.717 [2024-11-29 13:16:58.335043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.335072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.335435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.335465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.335799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.717 [2024-11-29 13:16:58.335828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.717 qpair failed and we were unable to recover it. 00:32:55.717 [2024-11-29 13:16:58.336198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.336230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.336608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.336636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.336896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.336925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.337250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.337280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.337711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.337741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.338110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.338139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.338522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.338551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.338909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.338938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.339298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.339327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.339699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.339728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.340094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.340124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.340504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.340534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.340901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.340930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.341205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.341237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.341578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.341608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.341975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.342005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.342375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.342406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.342572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.342609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.342808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.342837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.343235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.343265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.343651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.343682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.344036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.344066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.344503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.344533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.344843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.344872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.345240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.345271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.345643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.345673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.346040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.346069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 
00:32:55.718 [2024-11-29 13:16:58.346460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.346493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.346849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.346879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.718 [2024-11-29 13:16:58.347246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.718 [2024-11-29 13:16:58.347279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.718 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.347639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.347670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.348030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.348059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 
00:32:55.719 [2024-11-29 13:16:58.348337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.348367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.348724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.348754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.349146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.349191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.349429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.349457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 00:32:55.719 [2024-11-29 13:16:58.349812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.349843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it. 
00:32:55.719 [2024-11-29 13:16:58.350212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.719 [2024-11-29 13:16:58.350244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.719 qpair failed and we were unable to recover it.
[log collapsed: the three-message sequence above — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats approximately 115 times in total between 13:16:58.350 and 13:16:58.390, differing only in timestamps]
00:32:55.996 [2024-11-29 13:16:58.390853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.390881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.391221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.391252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.391598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.391628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.391998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.392027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.392386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.392416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 
00:32:55.996 [2024-11-29 13:16:58.392795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.392824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.393052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.393081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.393460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.393491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.393748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.393780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.394033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.394064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 
00:32:55.996 [2024-11-29 13:16:58.394311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.394342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.394696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.394725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.394865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.394895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.395289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.395321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.395695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.395725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 
00:32:55.996 [2024-11-29 13:16:58.396084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.996 [2024-11-29 13:16:58.396114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.996 qpair failed and we were unable to recover it. 00:32:55.996 [2024-11-29 13:16:58.396478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.396509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.396869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.396899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.397156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.397194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.397564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.397595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.397922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.397952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.398304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.398335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.398706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.398735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.399091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.399120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.399354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.399385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.399731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.399764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.399993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.400029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.400256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.400288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.400511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.400541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.400829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.400859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.401218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.401271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.401655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.401685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.402053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.402084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.402453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.402483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.402841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.402871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.403293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.403325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.403661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.403690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.404043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.404074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.404412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.404444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.404799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.404828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.405067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.405097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.405491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.405523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.405877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.405906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.406278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.406309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.406718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.406749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.407099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.407129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.407373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.407407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.407751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.407780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.408147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.408189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.408562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.408592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.997 [2024-11-29 13:16:58.408954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.408985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.409356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.409388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.409596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.409625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.409877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.409907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 00:32:55.997 [2024-11-29 13:16:58.410148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.997 [2024-11-29 13:16:58.410204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.997 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-11-29 13:16:58.410563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.410591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.410867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.410896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.411253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.411283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.411642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.411671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.412128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.412157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-11-29 13:16:58.412473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.412502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.412721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.412749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.413109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.413137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.413384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.413413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.413623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.413650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-11-29 13:16:58.413903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.413932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.414293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.414323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.414556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.414592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.414920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.414951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.415184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.415214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-11-29 13:16:58.415541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.415570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.415946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.415974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.416353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.416383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.416756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.416786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.417118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.417148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:55.998 [2024-11-29 13:16:58.417506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.417535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.417699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.417727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.417956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.417984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.418401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.418432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 00:32:55.998 [2024-11-29 13:16:58.418782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.998 [2024-11-29 13:16:58.418812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:55.998 qpair failed and we were unable to recover it. 
00:32:56.001 [2024-11-29 13:16:58.459510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.459539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.459884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.459913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.460155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.460196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.460429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.460462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.460828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.460857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 
00:32:56.001 [2024-11-29 13:16:58.461232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.461262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.461625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.461969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.461997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.462218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.462247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.462593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.462622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 
00:32:56.001 [2024-11-29 13:16:58.462981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.463012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.463382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.463412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.001 [2024-11-29 13:16:58.463778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.001 [2024-11-29 13:16:58.463807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.001 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.464177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.464219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.464648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.464678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.465022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.465051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.465420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.465451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.465809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.465837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.466064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.466092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.466503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.466533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.466893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.466922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.467292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.467321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.467682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.467711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.467959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.467987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.468376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.468407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.468760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.468788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.469147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.469188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.469397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.469426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.469790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.469820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.470201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.470232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.470597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.470626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.471000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.471029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.471377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.471408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.471642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.471670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.472022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.472052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.472399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.472430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.472811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.472840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.473203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.473233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.473473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.473506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.473849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.473879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.474237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.474268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.474644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.474674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.475105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.475133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.475523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.475553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.475906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.475936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.476405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.476442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.476964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.476997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.477207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.477237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.477578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.477607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.002 [2024-11-29 13:16:58.477944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.477973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 
00:32:56.002 [2024-11-29 13:16:58.478338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.002 [2024-11-29 13:16:58.478369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.002 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.478742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.478770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.479132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.479187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.479552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.479581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.479810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.479839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 
00:32:56.003 [2024-11-29 13:16:58.480134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.480177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.480398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.480427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.480875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.480905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.481121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.481151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.481518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.481548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 
00:32:56.003 [2024-11-29 13:16:58.481899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.481929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.482280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.482311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.482683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.482713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.483065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.483096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.483443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.483474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 
00:32:56.003 [2024-11-29 13:16:58.483741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.483770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.484122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.484151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.484511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.484542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.484766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.484796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.485137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.485174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 
00:32:56.003 [2024-11-29 13:16:58.485535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.485565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.485949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.485979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.486372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.486402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.486733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.486762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 00:32:56.003 [2024-11-29 13:16:58.487132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.003 [2024-11-29 13:16:58.487181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.003 qpair failed and we were unable to recover it. 
00:32:56.003 [2024-11-29 13:16:58.487577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.003 [2024-11-29 13:16:58.487607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.003 qpair failed and we were unable to recover it.
00:32:56.004 [2024-11-29 13:16:58.501033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.004 [2024-11-29 13:16:58.501144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:56.004 qpair failed and we were unable to recover it.
00:32:56.006 Read completed with error (sct=0, sc=8)
00:32:56.006 starting I/O failed
00:32:56.006 Write completed with error (sct=0, sc=8)
00:32:56.006 starting I/O failed
00:32:56.006 [2024-11-29 13:16:58.524773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:56.006 [2024-11-29 13:16:58.525128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.525208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.526033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.526072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.526445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.526550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.526961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.526999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.527339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.527372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.527726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.527757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.527989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.528019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.528270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.528301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.528662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.528692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.529078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.529107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.529535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.529567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.529926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.006 [2024-11-29 13:16:58.529956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.006 qpair failed and we were unable to recover it.
00:32:56.006 [2024-11-29 13:16:58.530340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.530370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.530604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.530635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.530916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.530945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.531297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.531327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.531691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.531720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.532082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.532112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.532473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.532504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.532726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.532755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.532978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.533008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.533367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.533399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.533761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.533790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.534198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.534229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.534498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.534528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.534818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.534849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.535230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.535260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.535504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.535533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.535776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.535803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.536196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.536226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.536586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.536617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.536991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.537022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.537254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.537284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.537689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.537720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.538107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.538136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.538414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.538446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.538820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.538850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.539126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.539170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.539516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.539547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.539896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.539924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.540297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.540328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.540687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.540716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.541082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.541112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.541496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.541525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.541902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.541932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.542157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.542209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.007 qpair failed and we were unable to recover it.
00:32:56.007 [2024-11-29 13:16:58.542565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.007 [2024-11-29 13:16:58.542594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.542846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.542874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.543258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.543290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.543528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.543557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.543768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.543798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.544148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.544188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.544586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.544616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.544884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.544913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.545319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.545349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.545645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.545674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.546007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.546037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.546408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.546438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.546826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.546855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.547250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.547280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.547668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.547703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.548087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.548115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.548594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.548625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.548975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.549004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.549283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.549313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.549694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.549723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.549958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.549986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.550374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.550404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.550775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.550803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.551179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.551209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.551550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.551580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.551905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.551934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.552318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.552347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.552714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.552742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.553105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.553135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.553382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.553412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.553774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.553802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.554204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.554235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.554376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.554405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.554669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.554698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.555050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.555081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.555454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.555484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.555852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.555881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.008 [2024-11-29 13:16:58.556137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.008 [2024-11-29 13:16:58.556174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.008 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.556576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.556604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.556966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.556995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.557270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.557299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.557681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.557711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.558011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.558039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.558394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.558424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.558783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.558812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.559064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.559096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.559443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.559474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.559625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.559653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.559999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.560030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.560256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.560286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.560615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.560652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.561014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.561043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.561376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.561405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.561839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.561870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.562222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.009 [2024-11-29 13:16:58.562259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.009 qpair failed and we were unable to recover it.
00:32:56.009 [2024-11-29 13:16:58.562615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.562645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.563002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.563032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.563298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.563328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.563712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.563741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.564108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.564140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-11-29 13:16:58.564403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.564433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.564818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.564846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.565078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.565106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.565488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.565518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.565862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.565892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-11-29 13:16:58.566279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.566308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.566660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.566697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.566794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.566824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.567171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.567202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.567436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.567465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-11-29 13:16:58.567835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.567865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.568092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.568121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.568496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.568526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.568788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.569190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.569221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 
00:32:56.009 [2024-11-29 13:16:58.569466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.009 [2024-11-29 13:16:58.569495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.009 qpair failed and we were unable to recover it. 00:32:56.009 [2024-11-29 13:16:58.569748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.569781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.570132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.570173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.570541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.570570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.570932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.570961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.571322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.571352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.571703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.571731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.572108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.572137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.572574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.572605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.572961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.572990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.573337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.573368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.573713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.573743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.574111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.574139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.574549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.574579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.574930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.574960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.575337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.575368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.575586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.575616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.575861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.575890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.576223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.576253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.576618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.576653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.576987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.577018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.577383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.577413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.577757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.577788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.578156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.578196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.578400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.578428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.578805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.578834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.579211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.579241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.579624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.579654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.579983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.580013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.580311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.580340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.580481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.580515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.580735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.580764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.581137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.581186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.581528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.581557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.581812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.581842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 
00:32:56.010 [2024-11-29 13:16:58.582254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.582284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.582629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.582659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.583026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.583055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.583420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.010 [2024-11-29 13:16:58.583451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.010 qpair failed and we were unable to recover it. 00:32:56.010 [2024-11-29 13:16:58.583816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.583846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 
00:32:56.011 [2024-11-29 13:16:58.584215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.584245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.584570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.584600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.584974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.585004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.585230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.585265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.585617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.585646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 
00:32:56.011 [2024-11-29 13:16:58.585998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.586028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.586149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.586201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.586579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.586611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.586967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.586996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.587227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.587260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 
00:32:56.011 [2024-11-29 13:16:58.587631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.587661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.588037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.588066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.588452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.588482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.588856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.588887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.589244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.589275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 
00:32:56.011 [2024-11-29 13:16:58.589655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.589684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.590032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.590064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.590417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.590447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.590813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.590842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 00:32:56.011 [2024-11-29 13:16:58.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.011 [2024-11-29 13:16:58.591259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.011 qpair failed and we were unable to recover it. 
00:32:56.014 [2024-11-29 13:16:58.630979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.631007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.631232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.631261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.631659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.631688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.632059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.632088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.632452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.632482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 
00:32:56.014 [2024-11-29 13:16:58.632860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.632889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.633144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.633179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.633421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.633450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.633764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.633792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.634126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.634168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 
00:32:56.014 [2024-11-29 13:16:58.634581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.634611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.634792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.634819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.635203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.635233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.635613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.635642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 00:32:56.014 [2024-11-29 13:16:58.635995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.014 [2024-11-29 13:16:58.636026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.014 qpair failed and we were unable to recover it. 
00:32:56.014 [2024-11-29 13:16:58.636387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.636418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.636746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.636776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.636990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.637018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.637383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.637412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.637620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.637648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.638015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.638044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.638282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.638315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.638666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.638696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.639062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.639092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.639481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.639511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.639872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.639901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.640264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.640295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.640625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.640655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.640870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.640898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.641262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.641293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.641518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.641546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.641759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.641787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.642123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.642151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.642535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.642564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.642925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.642955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.643367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.643398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.643625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.643655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.643875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.643902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.644275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.644305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.644642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.644681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.644927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.644957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.645183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.645214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.645576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.645605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.645976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.646006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.646218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.646248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.646631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.646661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.647018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.647046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.647262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.647291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.647670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.647700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.647943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.647977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 
00:32:56.015 [2024-11-29 13:16:58.648345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.648375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.648714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.648745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.649094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.015 [2024-11-29 13:16:58.649122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.015 qpair failed and we were unable to recover it. 00:32:56.015 [2024-11-29 13:16:58.649537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.649570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.649786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.649815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.650194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.650223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.650618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.650647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.651011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.651041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.651140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.651186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.651513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.651542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.651913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.651943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.652305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.652336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.652698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.652726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.652965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.652994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.653395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.653425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.653781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.653810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.654025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.654053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.654435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.654465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.654817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.654845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.655063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.655090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.655334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.655367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.655719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.655748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.656126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.656156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.656532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.656563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.656930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.656959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.657313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.657344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.657758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.657787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.658141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.658177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.658520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.658550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.658914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.658942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 
00:32:56.016 [2024-11-29 13:16:58.659210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.659239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.016 [2024-11-29 13:16:58.659573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.016 [2024-11-29 13:16:58.659601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.016 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.659968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.660000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.660298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.660328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.660551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.660579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-11-29 13:16:58.660966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.660997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.661223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.661253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.661594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.661632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.662010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.662039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.662404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.662443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-11-29 13:16:58.662803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.662832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.663047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.663077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.663432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.663464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.663845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.663874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.664235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.664266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-11-29 13:16:58.664649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.664678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.665020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.665051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.665321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.665351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.665728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.665759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.666131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.666167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-11-29 13:16:58.666539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.666567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.666934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.666964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.667185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.667216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.667548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.667577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.291 [2024-11-29 13:16:58.667955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.667984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 
00:32:56.291 [2024-11-29 13:16:58.668335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.291 [2024-11-29 13:16:58.668366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.291 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.668735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.668765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.669117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.669146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.669391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.669422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.669682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.669711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-11-29 13:16:58.670083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.670115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.670473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.670504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.670868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.670897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.671244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.671274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.671529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.671558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-11-29 13:16:58.671782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.671810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.672191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.672223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.672562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.672599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.672937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.672966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.673232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.673262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-11-29 13:16:58.673597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.673625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.673990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.674019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.674283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.674629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.674658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.675029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.675058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-11-29 13:16:58.675415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.675445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.675803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.675832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.676186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.676216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.676431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.676459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.676812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.676856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 
00:32:56.292 [2024-11-29 13:16:58.677243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.677277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.677661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.677691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.678060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.678089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.678456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.292 [2024-11-29 13:16:58.678486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.292 qpair failed and we were unable to recover it. 00:32:56.292 [2024-11-29 13:16:58.678844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.678875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.679245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.679276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.679638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.679668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.680028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.680056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.680414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.680445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.680828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.680859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.681226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.681258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.681467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.681496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.681821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.681851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.682220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.682250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.682595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.682632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.682994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.683023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.683388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.683418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.683779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.683807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.684197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.684229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.684458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.684487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.684882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.684910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.685282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.685312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.685679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.685707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.685943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.685979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.686220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.686249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.686590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.686620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.686939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.686971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.687313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.687345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.687719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.687749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.293 [2024-11-29 13:16:58.688113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.688144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 
00:32:56.293 [2024-11-29 13:16:58.688528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.293 [2024-11-29 13:16:58.688558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.293 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.688912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.688941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.689192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.689223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.689577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.689607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.689993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.690024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.294 [2024-11-29 13:16:58.690392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.690422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.690774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.690803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.691178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.691208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.691563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.691592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.691948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.691983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.294 [2024-11-29 13:16:58.692342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.692373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.692774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.692805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.693017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.693046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.693392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.693421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 00:32:56.294 [2024-11-29 13:16:58.693793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.294 [2024-11-29 13:16:58.693821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.294 qpair failed and we were unable to recover it. 
00:32:56.296 [2024-11-29 13:16:58.719181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-11-29 13:16:58.719210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.296 [2024-11-29 13:16:58.719561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-11-29 13:16:58.719589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.296 [2024-11-29 13:16:58.719949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-11-29 13:16:58.719979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.296 [2024-11-29 13:16:58.720074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-11-29 13:16:58.720102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.296 [2024-11-29 13:16:58.720614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.296 [2024-11-29 13:16:58.720737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420
00:32:56.296 qpair failed and we were unable to recover it.
00:32:56.298 [2024-11-29 13:16:58.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.734336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.734714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.734744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.734953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.734982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.735346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.735376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.735533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.735562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 
00:32:56.298 [2024-11-29 13:16:58.735816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.735845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.736247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.736278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.736494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.736524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.736903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.736932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.737305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.737335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 
00:32:56.298 [2024-11-29 13:16:58.737701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.737731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.738096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.738126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.738507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.738537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.738885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.738915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.298 [2024-11-29 13:16:58.739309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.739341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 
00:32:56.298 [2024-11-29 13:16:58.739768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.298 [2024-11-29 13:16:58.739798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.298 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.740148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.740201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.740455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.740484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.740860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.740890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.741282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.741314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.741702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.741734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.742151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.742212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.742571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.742602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.742841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.742870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.743259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.743290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.743446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.743474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.743915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.743945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.744282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.744313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.744668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.744698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.744961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.744990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.745343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.745373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.745721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.745759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.745991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.746023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.746368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.746399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.746650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.746679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.747022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.747051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.747421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.747461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.747832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.747861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.748231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.748262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.748655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.748686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.748923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.748954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.749341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.749373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.749782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.749811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.750178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.750209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 00:32:56.299 [2024-11-29 13:16:58.750447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.299 [2024-11-29 13:16:58.750479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.299 qpair failed and we were unable to recover it. 
00:32:56.299 [2024-11-29 13:16:58.750860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.750892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.751243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.751275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.751633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.751662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.752044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.752074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.752301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.752331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.300 [2024-11-29 13:16:58.752725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.752755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.753121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.753151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.753414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.753445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.753669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.753697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.754080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.754109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.300 [2024-11-29 13:16:58.754477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.754508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.754870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.754900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.755140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.755182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.755458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.755488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.755825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.755855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.300 [2024-11-29 13:16:58.756236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.756267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.756588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.756625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.756862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.756896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.757150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.757190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.757447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.757477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.300 [2024-11-29 13:16:58.757737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.757768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.758142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.758180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.758550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.758579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.758825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.758854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.759183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.759214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 
00:32:56.300 [2024-11-29 13:16:58.759543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.759572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.759940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.759977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.300 [2024-11-29 13:16:58.760337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.300 [2024-11-29 13:16:58.760368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.300 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.760634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.760663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.761014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.761044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 
00:32:56.301 [2024-11-29 13:16:58.761410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.761440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.761719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.761749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.762121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.762155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.762422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.762452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 00:32:56.301 [2024-11-29 13:16:58.762675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.301 [2024-11-29 13:16:58.762705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.301 qpair failed and we were unable to recover it. 
00:32:56.301-00:32:56.305 [2024-11-29 13:16:58.763074 through 13:16:58.801443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (identical error triplet repeated for every retry in this interval; only the per-attempt timestamps differ)
00:32:56.305 [2024-11-29 13:16:58.801789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.801818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.802070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.802110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.802439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.802473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.802823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.802855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.803214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.803244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 
00:32:56.305 [2024-11-29 13:16:58.803497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.803526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.803846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.803878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.804116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.804145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.804371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.804402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.804774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.804803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 
00:32:56.305 [2024-11-29 13:16:58.805198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.805231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.805595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.805626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.805852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.805885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.806277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.806310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.806694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.806726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 
00:32:56.305 [2024-11-29 13:16:58.807100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.807131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.807536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.807566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.807907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.807945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.808179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.808211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.808507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.808536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 
00:32:56.305 [2024-11-29 13:16:58.808898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.305 [2024-11-29 13:16:58.808928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.305 qpair failed and we were unable to recover it. 00:32:56.305 [2024-11-29 13:16:58.809191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.809225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.809592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.809622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.809859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.809889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.810106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.810137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.810505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.810538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.810907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.810937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.811151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.811193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.811558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.811589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.811956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.811987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.812339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.812372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.812719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.812750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.812993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.813021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.813388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.813417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.813788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.813817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.814206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.814238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.814618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.814648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.814922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.814951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.815180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.815211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.815480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.815510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.815891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.815922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.816286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.816327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.816685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.816714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.816929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.816958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.817336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.817366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.817733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.817763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.818140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.818189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.818534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.818573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.818936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.818965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.819326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.819358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 
00:32:56.306 [2024-11-29 13:16:58.819692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.306 [2024-11-29 13:16:58.819722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.306 qpair failed and we were unable to recover it. 00:32:56.306 [2024-11-29 13:16:58.820090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.820120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.820368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.820399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.820646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.820675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.821018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.821049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 
00:32:56.307 [2024-11-29 13:16:58.821432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.821465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.821835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.821867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.822250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.822280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.822607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.822638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.823079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.823108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 
00:32:56.307 [2024-11-29 13:16:58.823331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.823361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.823646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.823679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.824066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.824096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.824459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.824491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.824852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.824882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 
00:32:56.307 [2024-11-29 13:16:58.825240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.825272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.825481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.825510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.825876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.825906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.826137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.826187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.826588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.826617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 
00:32:56.307 [2024-11-29 13:16:58.826974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.827003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.827467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.827498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.827719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.827749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.828124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.828154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.828516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.828546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 
00:32:56.307 [2024-11-29 13:16:58.828920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.828950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.829063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.829095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.829445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.829476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.307 [2024-11-29 13:16:58.829855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.307 [2024-11-29 13:16:58.829884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.307 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.830238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.830269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.830646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.830677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.831043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.831078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.831472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.831503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.831852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.831881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.832127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.832183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.832323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.832352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.832732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.832762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.832996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.833027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.833391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.833421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.833777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.833806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.834180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.834211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.834538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.834567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.834662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.834690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d8000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.835209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.835316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.835725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.835763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.836234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.836294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.836685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.836716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.836983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.837013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.837376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.837409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.837754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.837784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.838156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.838202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.838551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.838580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.838840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.838870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.839243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.839275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.839626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.839658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 
00:32:56.308 [2024-11-29 13:16:58.840030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.840060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.308 [2024-11-29 13:16:58.840296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.308 [2024-11-29 13:16:58.840325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.308 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.840575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.840604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.840980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.841013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.841235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.841265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 
00:32:56.309 [2024-11-29 13:16:58.841535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.841569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.841954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.841985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.842340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.842371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.842765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.842794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.843131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.843186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 
00:32:56.309 [2024-11-29 13:16:58.843554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.843583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.843862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.843890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.844210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.844243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.844373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.844402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.844650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.844678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 
00:32:56.309 [2024-11-29 13:16:58.844899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.844928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.845166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.845210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.845552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.845582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.845934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.845964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.846348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.846379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 
00:32:56.309 [2024-11-29 13:16:58.846731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.846762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.847126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.847155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.847522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.847552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.847966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.847996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.848328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.848358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 
00:32:56.309 [2024-11-29 13:16:58.848694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.848725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.848980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.309 [2024-11-29 13:16:58.849009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.309 qpair failed and we were unable to recover it. 00:32:56.309 [2024-11-29 13:16:58.849404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.849436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.849782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.849811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.850157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.850196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 
00:32:56.310 [2024-11-29 13:16:58.850314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.850342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.850742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.850772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.851134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.851175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.851512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.851542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.851776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.851805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 
00:32:56.310 [2024-11-29 13:16:58.852196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.852227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.852459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.852492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.852863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.852892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.853114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.853144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.853535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.853565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 
00:32:56.310 [2024-11-29 13:16:58.853948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.853978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.854228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.854258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.854536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.854565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.854672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.854702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.854960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.854992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 
00:32:56.310 [2024-11-29 13:16:58.855303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.855335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.855687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.855717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.856086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.856117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.856376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.856406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.856781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.856811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 
00:32:56.310 [2024-11-29 13:16:58.857155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.857193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.857531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.857560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.857792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.857822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.858225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.858255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.310 qpair failed and we were unable to recover it. 00:32:56.310 [2024-11-29 13:16:58.858630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.310 [2024-11-29 13:16:58.858659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 [2024-11-29 13:16:58.858897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.858925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.859322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.859359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.859583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.859611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.859886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.859915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.860329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.860360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 [2024-11-29 13:16:58.860689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.860718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.860948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.860976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.861326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.861356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.861692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.861728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.862056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.862085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 [2024-11-29 13:16:58.862316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.862345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.862567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.862596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.862968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.862995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.863246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.863279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.863550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.863578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 [2024-11-29 13:16:58.863950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.863982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.864363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.864394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.864608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.864637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.864871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.864900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.865280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.865310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 [2024-11-29 13:16:58.865631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.865661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.866033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.311 [2024-11-29 13:16:58.866063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:56.311 [2024-11-29 13:16:58.866465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.866495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:56.311 [2024-11-29 13:16:58.866867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.866897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 
00:32:56.311 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.311 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.311 [2024-11-29 13:16:58.867285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.867315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.867695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.311 [2024-11-29 13:16:58.867724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.311 qpair failed and we were unable to recover it. 00:32:56.311 [2024-11-29 13:16:58.867958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.867988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.868385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.868417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.868780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.868810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.869183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.869215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.869585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.869614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.869983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.870012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.870125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.870154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f00d0000b90 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.870601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.870711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.871191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.871234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.871730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.871839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.872415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.872524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.873010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.873050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.873545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.873653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.874105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.874157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.874639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.874671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.874798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.874827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.875180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.875213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.875597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.875629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.875877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.875909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.876284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.876320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.876681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.876713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.877020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.877049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.877499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.877531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.877929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.877959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.878294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.878326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.878577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.878611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.878958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.878994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.879270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.879301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.879665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.879696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.879913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.879942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.880192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.880224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.880551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.880583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 
00:32:56.312 [2024-11-29 13:16:58.880919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.312 [2024-11-29 13:16:58.880956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.312 qpair failed and we were unable to recover it. 00:32:56.312 [2024-11-29 13:16:58.881179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.881216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.881434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.881465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.881816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.881847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.882192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.882222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.882558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.882588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.882940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.882970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.883342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.883372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.883743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.883776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.884129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.884177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.884520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.884553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.884780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.884809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.885058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.885087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.885308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.885338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.885711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.885743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.886080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.886111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.886455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.886487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.886846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.886876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.887233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.887263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.887617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.887646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.888015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.888045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.888286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.888316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.888688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.888724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.889086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.889116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.889380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.889413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.889787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.889816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.890202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.890236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.890611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.890641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.890898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.890929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.891181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.891215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.891580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.891611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.891943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.891976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.892220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.892255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.892745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.892776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.893131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.893168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 
00:32:56.313 [2024-11-29 13:16:58.893415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.893448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.893819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.893850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.894196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.894228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.894629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.894659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.313 qpair failed and we were unable to recover it. 00:32:56.313 [2024-11-29 13:16:58.895013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.313 [2024-11-29 13:16:58.895044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 
00:32:56.314 [2024-11-29 13:16:58.895259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.895291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.895668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.895698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.896044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.896076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.896419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.896455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.896833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.896864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 
00:32:56.314 [2024-11-29 13:16:58.897239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.897269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.897650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.897681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.898048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.898080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.898440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.898472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 00:32:56.314 [2024-11-29 13:16:58.898822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.314 [2024-11-29 13:16:58.898855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.314 qpair failed and we were unable to recover it. 
00:32:56.315 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:56.315 [2024-11-29 13:16:58.909814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.909848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:56.315 [2024-11-29 13:16:58.910126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.910170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.315 [2024-11-29 13:16:58.910544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.910576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:56.315 [2024-11-29 13:16:58.910837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.910872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 [2024-11-29 13:16:58.911192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.911224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 [2024-11-29 13:16:58.911634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.911665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 [2024-11-29 13:16:58.912008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.912039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.315 [2024-11-29 13:16:58.912225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.315 [2024-11-29 13:16:58.912254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.315 qpair failed and we were unable to recover it.
00:32:56.317 [2024-11-29 13:16:58.938222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.938252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.938611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.938647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.939006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.939035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.939419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.939450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.939646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.939676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 
00:32:56.317 [2024-11-29 13:16:58.940042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.940071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.940294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.940324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.940555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.940583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.940962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.940993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.941312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.941343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 
00:32:56.317 [2024-11-29 13:16:58.941786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.941815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.942199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.942230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.942595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.942634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.942955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.942986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.943356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.943388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 
00:32:56.317 [2024-11-29 13:16:58.943621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.943651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.944048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.944077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.944427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.944458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.944829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.944858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.945229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.945259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 
00:32:56.317 [2024-11-29 13:16:58.945633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.945663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.946012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.946041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.946396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.946426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.946776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.946806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 00:32:56.317 [2024-11-29 13:16:58.947194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.947235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.317 qpair failed and we were unable to recover it. 
00:32:56.317 Malloc0 00:32:56.317 [2024-11-29 13:16:58.947666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.317 [2024-11-29 13:16:58.947697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.948036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.948067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.948413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.948445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:56.318 [2024-11-29 13:16:58.948794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.318 [2024-11-29 13:16:58.948825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 
00:32:56.318 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.318 [2024-11-29 13:16:58.949063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.949093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.949518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.949549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.949926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.949957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.950338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.950378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.950744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.950774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 
00:32:56.318 [2024-11-29 13:16:58.951121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.951151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.951542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.951940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.951970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.952230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.952266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.952654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.952683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 
00:32:56.318 [2024-11-29 13:16:58.953046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.953075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.953514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.953546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.953889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.953918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.954276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.954307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.954534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.954563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 
00:32:56.318 [2024-11-29 13:16:58.954593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.318 [2024-11-29 13:16:58.954804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.954834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.955127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.955386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.955415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.955735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.955774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.956025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.956055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 
00:32:56.318 [2024-11-29 13:16:58.956317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.956347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.318 [2024-11-29 13:16:58.956741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.318 [2024-11-29 13:16:58.956772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.318 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.957020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.957053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.957339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.957371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.957620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.957650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 
00:32:56.584 [2024-11-29 13:16:58.957866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.957896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.958201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.958233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.958605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.958635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.958984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.959016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 00:32:56.584 [2024-11-29 13:16:58.959370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.584 [2024-11-29 13:16:58.959401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.584 qpair failed and we were unable to recover it. 
00:32:56.584 [2024-11-29 13:16:58.959778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.959809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.960176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.960208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.960569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.960598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.960821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.960850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.961217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.961248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.961372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.961412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.961799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.961830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.962188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.962219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.962602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.962632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.962881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.962912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.963231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.963262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.963528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.963557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.585 [2024-11-29 13:16:58.963934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.963965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:56.585 [2024-11-29 13:16:58.964260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.964294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.585 [2024-11-29 13:16:58.964549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.964583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.585 [2024-11-29 13:16:58.964935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.964967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.965372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.965403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.965664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.965700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.966046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.966077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.966417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.966449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.966789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.966820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.967201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.967234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.967620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.967648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.968012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.968042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.968389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.968420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.968780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.968809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.969149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.969220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.969473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.969502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.969753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.969784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.970153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.970195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.970583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.970612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.970974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.971003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.971384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.971415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.971770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.971800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 
00:32:56.585 [2024-11-29 13:16:58.972057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.972087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.972505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.972535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.585 [2024-11-29 13:16:58.972906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.585 [2024-11-29 13:16:58.972935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.585 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.973153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.973191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.973577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.973607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.973973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.974006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.974247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.974278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.974656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.974688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.975059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.975088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.975455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.975487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.975717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.975747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.586 [2024-11-29 13:16:58.976101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.976132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.586 [2024-11-29 13:16:58.976372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.976405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.586 [2024-11-29 13:16:58.976639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.976670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.586 [2024-11-29 13:16:58.977033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.977065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.977499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.977530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.977746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.977775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.978145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.978182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.978535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.978565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.978938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.978967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.979240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.979271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.979642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.979672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.980046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.980075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.980418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.980449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.980800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.980830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.981203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.981235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.981603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.981633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.981996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.982026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.982253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.982284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.982562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.982592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.982827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.982857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.983126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.983156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.983378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.983409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.983788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.983819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.586 [2024-11-29 13:16:58.984188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.984220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.984438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.984474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.984820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.984850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.985200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.985231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 00:32:56.586 [2024-11-29 13:16:58.985580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.586 [2024-11-29 13:16:58.985611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.586 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.985882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.986249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.986280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.986511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.986541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.986938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.986968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.987218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.987250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.987599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.987629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.587 [2024-11-29 13:16:58.987976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.988008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.587 [2024-11-29 13:16:58.988377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.988410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.587 [2024-11-29 13:16:58.988782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:56.587 [2024-11-29 13:16:58.988813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.989069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.989099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.989372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.989403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.989753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.989784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.990006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.990037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.990373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.990405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.990781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.990812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.991025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.991056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.991328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.991360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.991715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.991745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.992112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.992143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.992496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.992528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.992752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.992785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.993103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.993134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.993398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.993431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 00:32:56.587 [2024-11-29 13:16:58.993777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.587 [2024-11-29 13:16:58.993807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420 00:32:56.587 qpair failed and we were unable to recover it. 
00:32:56.587 [2024-11-29 13:16:58.994191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.587 [2024-11-29 13:16:58.994223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.587 qpair failed and we were unable to recover it.
00:32:56.587 [2024-11-29 13:16:58.994599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:56.587 [2024-11-29 13:16:58.994630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef0c0 with addr=10.0.0.2, port=4420
00:32:56.587 qpair failed and we were unable to recover it.
00:32:56.587 [2024-11-29 13:16:58.995003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.587 13:16:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:56.587 13:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.587 13:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:56.587 [2024-11-29 13:16:59.005940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.587 [2024-11-29 13:16:59.006097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.587 [2024-11-29 13:16:59.006149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.587 [2024-11-29 13:16:59.006199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.587 [2024-11-29 13:16:59.006221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.587 [2024-11-29 13:16:59.006276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.587 qpair failed and we were unable to recover it.
00:32:56.587 13:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.587 13:16:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1116037
00:32:56.587 [2024-11-29 13:16:59.015657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.587 [2024-11-29 13:16:59.015754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.587 [2024-11-29 13:16:59.015788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.587 [2024-11-29 13:16:59.015804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.587 [2024-11-29 13:16:59.015818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.587 [2024-11-29 13:16:59.015853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.587 qpair failed and we were unable to recover it.
00:32:56.587 [2024-11-29 13:16:59.025692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.587 [2024-11-29 13:16:59.025772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.587 [2024-11-29 13:16:59.025797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.587 [2024-11-29 13:16:59.025808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.588 [2024-11-29 13:16:59.025818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.588 [2024-11-29 13:16:59.025841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.588 qpair failed and we were unable to recover it. 
00:32:56.588 [2024-11-29 13:16:59.035744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.035821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.035839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.035846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.035853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.035870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.045722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.045792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.045810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.045817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.045823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.045840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.055726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.055793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.055811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.055819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.055826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.055842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.065751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.065811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.065835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.065843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.065849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.065866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.075768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.075844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.075862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.075869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.075875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.075892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.085843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.085916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.085934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.085941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.085947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.085964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.095845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.095927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.095944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.095951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.095957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.095974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.105866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.105925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.105944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.105951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.105963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.105980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.115765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.115849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.115870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.115878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.115885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.115903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.125829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.125903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.125921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.125929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.125935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.125953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.135948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.136015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.136033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.136040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.136047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.136063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.145991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.146055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.146073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.146080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.146086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.588 [2024-11-29 13:16:59.146103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.588 qpair failed and we were unable to recover it.
00:32:56.588 [2024-11-29 13:16:59.156053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.588 [2024-11-29 13:16:59.156132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.588 [2024-11-29 13:16:59.156150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.588 [2024-11-29 13:16:59.156161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.588 [2024-11-29 13:16:59.156169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.156186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.166052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.166122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.166139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.166147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.166153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.166176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.176048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.176147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.176168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.176176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.176182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.176199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.186098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.186168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.186185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.186192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.186199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.186216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.196115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.196180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.196203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.196211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.196217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.196234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.206187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.206264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.206282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.206289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.206296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.206313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.216339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.216413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.216429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.216437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.216443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.216459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.226233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.226300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.226316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.226324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.226330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.226347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.236179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.236249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.236266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.236274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.236285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.236302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.246291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.246367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.246383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.246390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.246397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.246413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.589 [2024-11-29 13:16:59.256317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.589 [2024-11-29 13:16:59.256379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.589 [2024-11-29 13:16:59.256397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.589 [2024-11-29 13:16:59.256404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.589 [2024-11-29 13:16:59.256410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.589 [2024-11-29 13:16:59.256427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.589 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.266211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.266275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.266296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.266303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.266310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.266328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.276369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.276436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.276456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.276463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.276470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.276487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.286410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.286486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.286504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.286512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.286519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.286535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.296433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.296494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.296511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.296518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.296524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.296541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.306465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.306528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.306545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.306552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.306559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.306575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.316491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.316556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.316572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.316579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.316586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.316601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.326525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.326603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.326626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.326634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.326640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.853 [2024-11-29 13:16:59.326657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.853 qpair failed and we were unable to recover it.
00:32:56.853 [2024-11-29 13:16:59.336551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.853 [2024-11-29 13:16:59.336622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.853 [2024-11-29 13:16:59.336640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.853 [2024-11-29 13:16:59.336649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.853 [2024-11-29 13:16:59.336655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.854 [2024-11-29 13:16:59.336672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.854 qpair failed and we were unable to recover it.
00:32:56.854 [2024-11-29 13:16:59.346546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.854 [2024-11-29 13:16:59.346607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.854 [2024-11-29 13:16:59.346626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.854 [2024-11-29 13:16:59.346634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.854 [2024-11-29 13:16:59.346641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.854 [2024-11-29 13:16:59.346658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.854 qpair failed and we were unable to recover it.
00:32:56.854 [2024-11-29 13:16:59.356587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.854 [2024-11-29 13:16:59.356655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.854 [2024-11-29 13:16:59.356675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.854 [2024-11-29 13:16:59.356682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.854 [2024-11-29 13:16:59.356689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.854 [2024-11-29 13:16:59.356707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.854 qpair failed and we were unable to recover it.
00:32:56.854 [2024-11-29 13:16:59.366630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.854 [2024-11-29 13:16:59.366710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.854 [2024-11-29 13:16:59.366727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.854 [2024-11-29 13:16:59.366735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.854 [2024-11-29 13:16:59.366748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.854 [2024-11-29 13:16:59.366764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.854 qpair failed and we were unable to recover it.
00:32:56.854 [2024-11-29 13:16:59.376695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:56.854 [2024-11-29 13:16:59.376752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:56.854 [2024-11-29 13:16:59.376769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:56.854 [2024-11-29 13:16:59.376777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:56.854 [2024-11-29 13:16:59.376784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:56.854 [2024-11-29 13:16:59.376800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:56.854 qpair failed and we were unable to recover it.
00:32:56.854 [2024-11-29 13:16:59.386680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.386747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.386763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.386771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.386777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.386793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.396692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.396757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.396774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.396781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.396788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.396804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.406794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.406860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.406876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.406884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.406890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.406906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.416764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.416840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.416878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.416889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.416896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.416921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.426775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.426838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.426876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.426886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.426893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.426918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.436881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.436956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.436987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.436995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.437001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.437023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.446913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.446987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.447006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.447013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.447019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.447038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.456890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.456952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.456976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.456983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.456990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.854 [2024-11-29 13:16:59.457008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.854 qpair failed and we were unable to recover it. 
00:32:56.854 [2024-11-29 13:16:59.466911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.854 [2024-11-29 13:16:59.466977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.854 [2024-11-29 13:16:59.466995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.854 [2024-11-29 13:16:59.467002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.854 [2024-11-29 13:16:59.467009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.467026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.476981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.477054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.477077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.477084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.477091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.477110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.487010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.487078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.487096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.487103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.487110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.487128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.497017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.497080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.497098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.497106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.497119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.497136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.507049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.507129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.507146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.507153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.507165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.507183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.517091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.517164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.517181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.517189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.517195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.517211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:56.855 [2024-11-29 13:16:59.527138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:56.855 [2024-11-29 13:16:59.527215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:56.855 [2024-11-29 13:16:59.527233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:56.855 [2024-11-29 13:16:59.527240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.855 [2024-11-29 13:16:59.527247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:56.855 [2024-11-29 13:16:59.527264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:56.855 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.537139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.537207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.537225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.537233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.537239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.537256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.547175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.547235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.547254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.547262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.547269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.547287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.557222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.557292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.557309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.557317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.557323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.557340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.567274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.567345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.567363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.567370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.567377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.567394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.577250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.577312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.577329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.577337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.577344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.577361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.587275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.587337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.587359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.587367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.587373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.587390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.597344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.597412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.597429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.597436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.597443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.597459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.607464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.607530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.607547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.607554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.607561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.607578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.617399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.617458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.617475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.617482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.617489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.617505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.627398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.627465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.627482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.627490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.627502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.627518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.637471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.637550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.637567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.637574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.637580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.637597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.647541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.118 [2024-11-29 13:16:59.647621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.118 [2024-11-29 13:16:59.647638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.118 [2024-11-29 13:16:59.647646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.118 [2024-11-29 13:16:59.647652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.118 [2024-11-29 13:16:59.647669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.118 qpair failed and we were unable to recover it. 
00:32:57.118 [2024-11-29 13:16:59.657513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.657569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.657586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.657594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.657600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.657617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.667493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.667562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.667579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.667586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.667593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.667609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.677592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.677660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.677682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.677690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.677696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.677714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.687628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.687696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.687714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.687721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.687728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.687745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.697614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.697674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.697691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.697699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.697705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.697722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.707639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.707707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.707724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.707732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.707738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.707754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.717668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.717735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.717756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.717764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.717770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.717786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.727608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.727681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.727699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.727706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.727713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.727728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.737742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.737807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.737824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.737831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.737838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.737854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.747785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.747879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.747896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.747904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.747910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.747927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.757796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.757876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.757914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.757924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.757937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.757962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.767824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.767906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.767944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.767954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.767961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.767986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.777846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.777933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.119 [2024-11-29 13:16:59.777970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.119 [2024-11-29 13:16:59.777980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.119 [2024-11-29 13:16:59.777987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.119 [2024-11-29 13:16:59.778012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.119 qpair failed and we were unable to recover it. 
00:32:57.119 [2024-11-29 13:16:59.787864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.119 [2024-11-29 13:16:59.787932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.120 [2024-11-29 13:16:59.787953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.120 [2024-11-29 13:16:59.787961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.120 [2024-11-29 13:16:59.787968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.120 [2024-11-29 13:16:59.787986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.120 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.797916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.798028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.798046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.798054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.798061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.798078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.807960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.808039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.808057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.808065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.808072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.808088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.817939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.818002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.818022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.818030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.818038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.818055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.827991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.828054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.828073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.828081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.828087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.828106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.838005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.838070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.838088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.838096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.838103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.838120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.848105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.848187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.848211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.848219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.848225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.384 [2024-11-29 13:16:59.848242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.384 qpair failed and we were unable to recover it. 
00:32:57.384 [2024-11-29 13:16:59.858084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.384 [2024-11-29 13:16:59.858154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.384 [2024-11-29 13:16:59.858181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.384 [2024-11-29 13:16:59.858188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.384 [2024-11-29 13:16:59.858195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.858212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.868099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.868167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.868185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.868193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.868200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.868216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.878154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.878227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.878245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.878253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.878259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.878276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.888196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.888269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.888287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.888294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.888306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.888323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.898194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.898269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.898287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.898295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.898301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.898318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.908227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.908288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.908310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.908317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.908324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.908342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.918264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.918342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.918360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.918367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.918373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.918390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.928329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.928401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.928419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.928426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.928432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.928449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.938330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.938391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.938410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.938417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.938423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.938441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.948317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.948373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.948390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.948398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.948404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.385 [2024-11-29 13:16:59.948421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.385 qpair failed and we were unable to recover it. 
00:32:57.385 [2024-11-29 13:16:59.958393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.385 [2024-11-29 13:16:59.958485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.385 [2024-11-29 13:16:59.958502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.385 [2024-11-29 13:16:59.958510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.385 [2024-11-29 13:16:59.958516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:16:59.958532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:16:59.968440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:16:59.968517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:16:59.968534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:16:59.968541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:16:59.968548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:16:59.968565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:16:59.978470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:16:59.978530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:16:59.978553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:16:59.978560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:16:59.978567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:16:59.978583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:16:59.988364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:16:59.988433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:16:59.988455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:16:59.988463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:16:59.988469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:16:59.988488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:16:59.998426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:16:59.998512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:16:59.998531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:16:59.998540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:16:59.998547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:16:59.998565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.008627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.008703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.008724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.008732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.008738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.008756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.018591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.018660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.018678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.018686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.018698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.018716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.028684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.028781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.028799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.028807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.028814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.028831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.038617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.038722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.038747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.038763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.038773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.038804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.048711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.048788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.048809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.048819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.048826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.048845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.386 [2024-11-29 13:17:00.058666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.386 [2024-11-29 13:17:00.058733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.386 [2024-11-29 13:17:00.058751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.386 [2024-11-29 13:17:00.058759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.386 [2024-11-29 13:17:00.058766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.386 [2024-11-29 13:17:00.058783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.386 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.068684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.068745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.068763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.068771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.068778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.650 [2024-11-29 13:17:00.068795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.650 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.078755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.078827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.078846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.078855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.078862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.650 [2024-11-29 13:17:00.078880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.650 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.088779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.088870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.088903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.088913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.088920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.650 [2024-11-29 13:17:00.088947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.650 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.098769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.098841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.098880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.098889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.098897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.650 [2024-11-29 13:17:00.098922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.650 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.108822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.108895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.108941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.108951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.108959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.650 [2024-11-29 13:17:00.108986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.650 qpair failed and we were unable to recover it. 
00:32:57.650 [2024-11-29 13:17:00.118871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.650 [2024-11-29 13:17:00.118945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.650 [2024-11-29 13:17:00.118967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.650 [2024-11-29 13:17:00.118975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.650 [2024-11-29 13:17:00.118982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.119001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.128907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.128997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.129034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.129044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.129051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.129076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.138916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.138981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.139004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.139013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.139021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.139040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.148900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.148961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.148980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.148988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.149003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.149021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.159003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.159080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.159112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.159120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.159127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.159151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.169048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.169121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.169141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.169148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.169155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.169181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.179032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.179098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.179119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.179126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.179133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.179151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.189052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.189131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.189150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.189164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.189172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.189191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.199095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.199173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.199192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.199200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.199207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.199225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.209170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.209241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.209259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.209267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.651 [2024-11-29 13:17:00.209274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.651 [2024-11-29 13:17:00.209291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.651 qpair failed and we were unable to recover it. 
00:32:57.651 [2024-11-29 13:17:00.219168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.651 [2024-11-29 13:17:00.219234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.651 [2024-11-29 13:17:00.219251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.651 [2024-11-29 13:17:00.219259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.219267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.219283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.229213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.229284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.229301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.229309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.229316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.229333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.239231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.239312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.239334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.239341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.239348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.239365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.249281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.249351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.249373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.249381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.249388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.249406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.259313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.259380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.259397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.259405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.259411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.259428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.269330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.269391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.269408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.269416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.269422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.269438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.279322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.279424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.279441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.279448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.279461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.279478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.289412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.289482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.289500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.289507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.289513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.289530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.299428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.299526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.299542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.299550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.299556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.299573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.309478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.309544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.309561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.309568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.309575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.309592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.652 [2024-11-29 13:17:00.319493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.652 [2024-11-29 13:17:00.319559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.652 [2024-11-29 13:17:00.319576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.652 [2024-11-29 13:17:00.319583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.652 [2024-11-29 13:17:00.319590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.652 [2024-11-29 13:17:00.319606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.652 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.329555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.329637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.329655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.329662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.329670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.329687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.339556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.339630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.339647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.339655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.339661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.339678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.349554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.349621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.349639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.349646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.349653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.349669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.359624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.359692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.359710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.359718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.359724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.359741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.369638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.369715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.369738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.369745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.369752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.369769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.379650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.379722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.379739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.379746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.915 [2024-11-29 13:17:00.379753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.915 [2024-11-29 13:17:00.379769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.915 qpair failed and we were unable to recover it. 
00:32:57.915 [2024-11-29 13:17:00.389659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.915 [2024-11-29 13:17:00.389717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.915 [2024-11-29 13:17:00.389735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.915 [2024-11-29 13:17:00.389743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.389749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.389765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.399736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.399799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.399817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.399825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.399831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.399847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.409750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.409822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.409839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.409847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.409860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.409877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.419745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.419818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.419857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.419866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.419875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.419899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.429680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.429744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.429764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.429773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.429779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.429797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.439842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.439912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.439930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.439938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.439945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.439963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.449894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.449976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.450015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.450025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.450033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.450058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.459860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.459922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.459945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.459954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.459961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.459980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.469929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.470033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.470071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.470082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.470090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.470114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.479944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.480011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.480036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.480044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.916 [2024-11-29 13:17:00.480051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.916 [2024-11-29 13:17:00.480071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.916 qpair failed and we were unable to recover it. 
00:32:57.916 [2024-11-29 13:17:00.490016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.916 [2024-11-29 13:17:00.490093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.916 [2024-11-29 13:17:00.490112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.916 [2024-11-29 13:17:00.490120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.490127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.490144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.499987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.500058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.500081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.500089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.500095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.500112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.510023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.510089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.510107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.510114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.510120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.510137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.519978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.520082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.520100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.520108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.520114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.520131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.530186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.530265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.530283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.530291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.530297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.530314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.540144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.540223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.540240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.540248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.540260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.540277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.550132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.550202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.550219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.550227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.550234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.550250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.560196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.560280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.560298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.560305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.560312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.560329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.570267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.570339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.570358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.570365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.570372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.570388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.580271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.580337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.580354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.580362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.580368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.580384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:57.917 [2024-11-29 13:17:00.590291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:57.917 [2024-11-29 13:17:00.590375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:57.917 [2024-11-29 13:17:00.590392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:57.917 [2024-11-29 13:17:00.590400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:57.917 [2024-11-29 13:17:00.590406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:57.917 [2024-11-29 13:17:00.590422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:57.917 qpair failed and we were unable to recover it. 
00:32:58.180 [2024-11-29 13:17:00.600338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.180 [2024-11-29 13:17:00.600403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.180 [2024-11-29 13:17:00.600421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.180 [2024-11-29 13:17:00.600429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.600435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.600452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.610392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.610471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.610490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.610497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.610504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.610521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.620397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.620492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.620508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.620515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.620522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.620538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.630417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.630477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.630499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.630507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.630513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.630530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.640470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.640539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.640556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.640564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.640570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.640586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.650498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.650566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.650584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.650591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.650598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.650614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.660521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.660583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.660602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.660609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.660616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.660632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.670577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.670641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.670658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.670666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.670672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.670694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.680577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.680644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.680662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.680669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.680676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.680693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.690626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.690703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.181 [2024-11-29 13:17:00.690720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.181 [2024-11-29 13:17:00.690727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.181 [2024-11-29 13:17:00.690734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.181 [2024-11-29 13:17:00.690750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.181 qpair failed and we were unable to recover it. 
00:32:58.181 [2024-11-29 13:17:00.700618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.181 [2024-11-29 13:17:00.700683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.700705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.700713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.700719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.700737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.710620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.710690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.710709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.710716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.710722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.710739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.720655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.720768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.720787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.720794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.720801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.720818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.730747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.730814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.730831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.730839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.730845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.730864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.740734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.740808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.740826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.740833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.740840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.740856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.750766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.750831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.750849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.750857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.750864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.750881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.760806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.760915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.760936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.760944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.760951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.760966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.770847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.770918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.770936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.770943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.770950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.770966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.780783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.780841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.780875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.780884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.780891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.780913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.790861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.182 [2024-11-29 13:17:00.790925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.182 [2024-11-29 13:17:00.790945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.182 [2024-11-29 13:17:00.790953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.182 [2024-11-29 13:17:00.790959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.182 [2024-11-29 13:17:00.790976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.182 qpair failed and we were unable to recover it. 
00:32:58.182 [2024-11-29 13:17:00.800890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.800950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.800967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.800974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.800981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.801002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.183 [2024-11-29 13:17:00.810843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.810911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.810927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.810935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.810941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.810957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.183 [2024-11-29 13:17:00.820891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.820942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.820958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.820966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.820973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.820988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.183 [2024-11-29 13:17:00.830961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.831019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.831034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.831041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.831048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.831063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.183 [2024-11-29 13:17:00.840990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.841048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.841062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.841070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.841076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.841091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.183 [2024-11-29 13:17:00.851038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.183 [2024-11-29 13:17:00.851092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.183 [2024-11-29 13:17:00.851108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.183 [2024-11-29 13:17:00.851115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.183 [2024-11-29 13:17:00.851122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.183 [2024-11-29 13:17:00.851137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.183 qpair failed and we were unable to recover it. 
00:32:58.446 [2024-11-29 13:17:00.861009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.446 [2024-11-29 13:17:00.861064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.446 [2024-11-29 13:17:00.861078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.446 [2024-11-29 13:17:00.861086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.446 [2024-11-29 13:17:00.861092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.446 [2024-11-29 13:17:00.861107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.446 qpair failed and we were unable to recover it. 
00:32:58.446 [2024-11-29 13:17:00.871037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.446 [2024-11-29 13:17:00.871091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.446 [2024-11-29 13:17:00.871106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.446 [2024-11-29 13:17:00.871114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.446 [2024-11-29 13:17:00.871120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.446 [2024-11-29 13:17:00.871134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.446 qpair failed and we were unable to recover it. 
00:32:58.446 [2024-11-29 13:17:00.881112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.446 [2024-11-29 13:17:00.881190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.446 [2024-11-29 13:17:00.881205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.446 [2024-11-29 13:17:00.881213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.446 [2024-11-29 13:17:00.881219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.446 [2024-11-29 13:17:00.881233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.447 qpair failed and we were unable to recover it. 
00:32:58.447 [2024-11-29 13:17:00.891132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.447 [2024-11-29 13:17:00.891198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.447 [2024-11-29 13:17:00.891217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.447 [2024-11-29 13:17:00.891224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.447 [2024-11-29 13:17:00.891230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.447 [2024-11-29 13:17:00.891245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.447 qpair failed and we were unable to recover it. 
00:32:58.447 [2024-11-29 13:17:00.901099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.901152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.901171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.901178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.901185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.901199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.911177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.911246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.911261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.911268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.911274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.911288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.921213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.921300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.921314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.921321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.921327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.921341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.931231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.931288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.931302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.931309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.931315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.931332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.941193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.941239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.941254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.941261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.941267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.941281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.951222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.951271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.951285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.951291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.951298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.951312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.961253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.961300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.961314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.961321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.961328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.961341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.971325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.971375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.971389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.971396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.971402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.447 [2024-11-29 13:17:00.971416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.447 qpair failed and we were unable to recover it.
00:32:58.447 [2024-11-29 13:17:00.981293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.447 [2024-11-29 13:17:00.981338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.447 [2024-11-29 13:17:00.981352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.447 [2024-11-29 13:17:00.981359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.447 [2024-11-29 13:17:00.981365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:00.981379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:00.991349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:00.991399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:00.991413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:00.991420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:00.991426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:00.991439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.001304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.001354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.001367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.001374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.001381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.001395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.011434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.011488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.011502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.011510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.011516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.011530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.021388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.021439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.021455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.021463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.021469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.021482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.031468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.031540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.031554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.031561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.031567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.031580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.041461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.041507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.041520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.041527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.041534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.041547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.051520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.051569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.051582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.051589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.051595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.051609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.061506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.061554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.061568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.061576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.061582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.061604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.071526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.071572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.071586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.071593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.071599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.071612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.081553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.448 [2024-11-29 13:17:01.081610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.448 [2024-11-29 13:17:01.081623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.448 [2024-11-29 13:17:01.081630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.448 [2024-11-29 13:17:01.081636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.448 [2024-11-29 13:17:01.081649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.448 qpair failed and we were unable to recover it.
00:32:58.448 [2024-11-29 13:17:01.091613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.449 [2024-11-29 13:17:01.091662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.449 [2024-11-29 13:17:01.091676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.449 [2024-11-29 13:17:01.091683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.449 [2024-11-29 13:17:01.091689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.449 [2024-11-29 13:17:01.091702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.449 qpair failed and we were unable to recover it.
00:32:58.449 [2024-11-29 13:17:01.101620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.449 [2024-11-29 13:17:01.101668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.449 [2024-11-29 13:17:01.101682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.449 [2024-11-29 13:17:01.101689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.449 [2024-11-29 13:17:01.101695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.449 [2024-11-29 13:17:01.101709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.449 qpair failed and we were unable to recover it.
00:32:58.449 [2024-11-29 13:17:01.111672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.449 [2024-11-29 13:17:01.111777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.449 [2024-11-29 13:17:01.111790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.449 [2024-11-29 13:17:01.111796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.449 [2024-11-29 13:17:01.111803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.449 [2024-11-29 13:17:01.111817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.449 qpair failed and we were unable to recover it.
00:32:58.449 [2024-11-29 13:17:01.121648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.449 [2024-11-29 13:17:01.121692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.449 [2024-11-29 13:17:01.121705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.449 [2024-11-29 13:17:01.121712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.449 [2024-11-29 13:17:01.121718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.449 [2024-11-29 13:17:01.121731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.449 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.131739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.710 [2024-11-29 13:17:01.131791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.710 [2024-11-29 13:17:01.131804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.710 [2024-11-29 13:17:01.131811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.710 [2024-11-29 13:17:01.131817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.710 [2024-11-29 13:17:01.131831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.710 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.141599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.710 [2024-11-29 13:17:01.141649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.710 [2024-11-29 13:17:01.141662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.710 [2024-11-29 13:17:01.141669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.710 [2024-11-29 13:17:01.141675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.710 [2024-11-29 13:17:01.141688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.710 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.151777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.710 [2024-11-29 13:17:01.151824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.710 [2024-11-29 13:17:01.151841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.710 [2024-11-29 13:17:01.151848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.710 [2024-11-29 13:17:01.151854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.710 [2024-11-29 13:17:01.151868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.710 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.161753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.710 [2024-11-29 13:17:01.161796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.710 [2024-11-29 13:17:01.161810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.710 [2024-11-29 13:17:01.161817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.710 [2024-11-29 13:17:01.161823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.710 [2024-11-29 13:17:01.161836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.710 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.171824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.710 [2024-11-29 13:17:01.171877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.710 [2024-11-29 13:17:01.171890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.710 [2024-11-29 13:17:01.171897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.710 [2024-11-29 13:17:01.171904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.710 [2024-11-29 13:17:01.171917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.710 qpair failed and we were unable to recover it.
00:32:58.710 [2024-11-29 13:17:01.181790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.181839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.181864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.181872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.181879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.181898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.191878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.191928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.191952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.191961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.191968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.191992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.201853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.201903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.201928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.201936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.201943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.201962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.211951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.212007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.212032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.212041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.212047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.212066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.221825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.221872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.221886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.221893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.221900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.221915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.231998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.232091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.232104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.232111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.232118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.232131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.241976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.711 [2024-11-29 13:17:01.242023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.711 [2024-11-29 13:17:01.242037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.711 [2024-11-29 13:17:01.242044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.711 [2024-11-29 13:17:01.242050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.711 [2024-11-29 13:17:01.242063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.711 qpair failed and we were unable to recover it.
00:32:58.711 [2024-11-29 13:17:01.252085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.252145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.252161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.252169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.252175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.252189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.262045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.262096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.262109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.262116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.262122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.262136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.272115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.272187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.272201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.272208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.272214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.272228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.282117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.282165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.282182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.282189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.282195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.282209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.292201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.292250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.292264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.292271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.292277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.292291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.302140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.302193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.711 [2024-11-29 13:17:01.302207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.711 [2024-11-29 13:17:01.302214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.711 [2024-11-29 13:17:01.302220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.711 [2024-11-29 13:17:01.302234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.711 qpair failed and we were unable to recover it. 
00:32:58.711 [2024-11-29 13:17:01.312191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.711 [2024-11-29 13:17:01.312238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.312251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.312258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.312264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.312278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.322235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.322282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.322295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.322302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.322308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.322325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.332276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.332326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.332339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.332346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.332352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.332366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.342245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.342290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.342303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.342310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.342316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.342330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.352289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.352331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.352344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.352351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.352358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.352371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.362330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.362373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.362387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.362393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.362400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.362413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.372305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.372361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.372375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.372382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.372388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.372401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.712 [2024-11-29 13:17:01.382296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.712 [2024-11-29 13:17:01.382338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.712 [2024-11-29 13:17:01.382351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.712 [2024-11-29 13:17:01.382358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.712 [2024-11-29 13:17:01.382364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.712 [2024-11-29 13:17:01.382377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.712 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.392287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.392330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.392343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.392350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.392357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.392370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.402436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.402482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.402495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.402502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.402508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.402521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.412508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.412562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.412578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.412585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.412591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.412604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.422475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.422537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.422551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.422558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.422564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.422577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.432484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.432529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.432542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.432549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.432555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.432568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.442522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.442570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.442584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.442591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.973 [2024-11-29 13:17:01.442597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.973 [2024-11-29 13:17:01.442611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.973 qpair failed and we were unable to recover it. 
00:32:58.973 [2024-11-29 13:17:01.452616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.973 [2024-11-29 13:17:01.452662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.973 [2024-11-29 13:17:01.452676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.973 [2024-11-29 13:17:01.452682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.452689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.452706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.462584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.462625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.462639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.462646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.462652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.462665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.472625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.472669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.472682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.472689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.472695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.472709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.482653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.482699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.482714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.482721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.482727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.482741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.492716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.492770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.492783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.492790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.492797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.492810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.502683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.502727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.502741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.502748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.502755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.502769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.512742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:58.974 [2024-11-29 13:17:01.512791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:58.974 [2024-11-29 13:17:01.512805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:58.974 [2024-11-29 13:17:01.512811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:58.974 [2024-11-29 13:17:01.512818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:58.974 [2024-11-29 13:17:01.512831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:58.974 qpair failed and we were unable to recover it. 
00:32:58.974 [2024-11-29 13:17:01.522743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.522790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.522803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.522810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.522817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.522831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.532832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.532884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.532897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.532905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.532911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.532925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.542771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.542815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.542832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.542839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.542846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.542860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.552836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.552881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.552895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.552902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.552908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.552922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.562842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.562897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.562922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.562931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.562938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.562957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.572942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.572992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.573007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.573014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.573021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.573036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.974 qpair failed and we were unable to recover it.
00:32:58.974 [2024-11-29 13:17:01.582906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.974 [2024-11-29 13:17:01.582953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.974 [2024-11-29 13:17:01.582967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.974 [2024-11-29 13:17:01.582975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.974 [2024-11-29 13:17:01.582981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.974 [2024-11-29 13:17:01.582999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.592938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.592992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.593007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.593014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.593022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.593040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.602980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.603030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.603044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.603051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.603057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.603071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.613032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.613086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.613099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.613107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.613113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.613126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.623032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.623087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.623101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.623108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.623114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.623128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.633039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.633079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.633093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.633100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.633107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.633120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:58.975 [2024-11-29 13:17:01.643081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:58.975 [2024-11-29 13:17:01.643127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:58.975 [2024-11-29 13:17:01.643140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:58.975 [2024-11-29 13:17:01.643147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:58.975 [2024-11-29 13:17:01.643154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:58.975 [2024-11-29 13:17:01.643172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:58.975 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.653155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.653216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.653230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.653237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.653243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.653257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.663142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.663190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.663205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.663212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.663218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.663232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.673124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.673172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.673189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.673196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.673202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.673217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.683071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.683116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.683131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.683138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.683145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.683163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.693262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.693316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.693330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.693337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.693344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.693358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.703244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.703316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.703329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.703336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.703343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.703356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.713141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.713190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.713204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.713211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.713217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.713235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.723196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.723244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.723258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.723265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.723271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.723284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.733373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.733429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.733442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.733449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.733455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.733469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.743324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.743363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.743376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.743383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.743389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.743403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.753252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.753299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.753313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.753320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.753326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.753340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.763395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.236 [2024-11-29 13:17:01.763443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.236 [2024-11-29 13:17:01.763456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.236 [2024-11-29 13:17:01.763463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.236 [2024-11-29 13:17:01.763470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.236 [2024-11-29 13:17:01.763483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.236 qpair failed and we were unable to recover it.
00:32:59.236 [2024-11-29 13:17:01.773481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.773528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.773541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.773548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.773554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.773567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.783441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.783510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.783523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.783530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.783537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.783550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.793492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.793537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.793550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.793558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.793564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.793577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.803536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.803583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.803599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.803606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.803613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.803626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.813603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.813656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.813669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.813676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.813683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.813696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.823568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.823614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.823627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.823634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.823640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.823654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.833559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.833607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.833621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.833628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.833634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.833647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.843628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.843676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.843690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.843697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.843703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.843720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.853674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.853735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.853748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.853755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.853762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.853775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.863671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:59.237 [2024-11-29 13:17:01.863732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:59.237 [2024-11-29 13:17:01.863745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:59.237 [2024-11-29 13:17:01.863752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:59.237 [2024-11-29 13:17:01.863759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:32:59.237 [2024-11-29 13:17:01.863772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:59.237 qpair failed and we were unable to recover it.
00:32:59.237 [2024-11-29 13:17:01.873657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.237 [2024-11-29 13:17:01.873703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.237 [2024-11-29 13:17:01.873716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.237 [2024-11-29 13:17:01.873723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.237 [2024-11-29 13:17:01.873729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.237 [2024-11-29 13:17:01.873743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.237 qpair failed and we were unable to recover it. 
00:32:59.237 [2024-11-29 13:17:01.883604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.237 [2024-11-29 13:17:01.883652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.237 [2024-11-29 13:17:01.883665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.237 [2024-11-29 13:17:01.883672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.237 [2024-11-29 13:17:01.883678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.237 [2024-11-29 13:17:01.883692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.237 qpair failed and we were unable to recover it. 
00:32:59.237 [2024-11-29 13:17:01.893809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.237 [2024-11-29 13:17:01.893856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.237 [2024-11-29 13:17:01.893870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.237 [2024-11-29 13:17:01.893877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.237 [2024-11-29 13:17:01.893883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.237 [2024-11-29 13:17:01.893897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.237 qpair failed and we were unable to recover it. 
00:32:59.237 [2024-11-29 13:17:01.903728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.237 [2024-11-29 13:17:01.903772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.237 [2024-11-29 13:17:01.903786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.238 [2024-11-29 13:17:01.903793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.238 [2024-11-29 13:17:01.903800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.238 [2024-11-29 13:17:01.903813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.238 qpair failed and we were unable to recover it. 
00:32:59.499 [2024-11-29 13:17:01.913820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.499 [2024-11-29 13:17:01.913865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.499 [2024-11-29 13:17:01.913878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.499 [2024-11-29 13:17:01.913885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.499 [2024-11-29 13:17:01.913891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.499 [2024-11-29 13:17:01.913905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.499 qpair failed and we were unable to recover it. 
00:32:59.499 [2024-11-29 13:17:01.923817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.499 [2024-11-29 13:17:01.923861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.499 [2024-11-29 13:17:01.923874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.499 [2024-11-29 13:17:01.923881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.499 [2024-11-29 13:17:01.923887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.499 [2024-11-29 13:17:01.923901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.499 qpair failed and we were unable to recover it. 
00:32:59.499 [2024-11-29 13:17:01.933870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.499 [2024-11-29 13:17:01.933920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.933937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.933944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.933950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.933964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.943757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.943798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.943812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.943819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.943825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.943839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.953914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.953959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.953973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.953980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.953986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.954000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.963946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.964004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.964029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.964038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.964045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.964064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.974006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.974058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.974074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.974081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.974088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.974107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.984002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.984045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.984059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.984066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.984073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.984087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:01.994012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:01.994056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:01.994069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:01.994076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:01.994082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:01.994096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.004053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.004098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.004111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.004119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.004126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.004140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.014134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.014187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.014201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.014208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.014214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.014228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.024100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.024143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.024157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.024168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.024174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.024188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.034035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.034086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.034101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.034108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.034115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.034129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.044095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.044141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.044161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.044169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.044177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.044191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.054240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.054288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.054301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.054309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.054315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.054329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.064256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.064332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.064349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.064357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.064363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.064377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.074242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.074312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.074326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.074332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.074339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.074352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.084252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.084297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.084310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.084318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.084324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.084337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.094326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.094374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.094387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.094394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.094400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.094413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.104208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.104251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.104264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.104271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.104277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.104294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.114358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.114402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.114415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.114423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.114429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.114442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.124385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.124433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.124446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.124454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.124460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.124473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.134521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.134616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.134630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.134637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.134644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.134657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.144416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.144460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.144473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.144480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.144487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.144500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.500 [2024-11-29 13:17:02.154454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.500 [2024-11-29 13:17:02.154502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.500 [2024-11-29 13:17:02.154516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.500 [2024-11-29 13:17:02.154523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.500 [2024-11-29 13:17:02.154529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.500 [2024-11-29 13:17:02.154543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.500 qpair failed and we were unable to recover it. 
00:32:59.501 [2024-11-29 13:17:02.164507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.501 [2024-11-29 13:17:02.164556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.501 [2024-11-29 13:17:02.164571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.501 [2024-11-29 13:17:02.164578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.501 [2024-11-29 13:17:02.164585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.501 [2024-11-29 13:17:02.164603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.501 qpair failed and we were unable to recover it. 
00:32:59.501 [2024-11-29 13:17:02.174580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.501 [2024-11-29 13:17:02.174677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.501 [2024-11-29 13:17:02.174691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.501 [2024-11-29 13:17:02.174698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.501 [2024-11-29 13:17:02.174705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.501 [2024-11-29 13:17:02.174719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.501 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.184521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.184562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.184575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.184582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.184588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.184602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.194551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.194608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.194624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.194631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.194637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.194651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.204605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.204653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.204666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.204673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.204679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.204693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.214663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.214716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.214729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.214736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.214743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.214756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.224636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.224683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.224696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.224703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.224709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.224723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.234676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.234719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.762 [2024-11-29 13:17:02.234733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.762 [2024-11-29 13:17:02.234740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.762 [2024-11-29 13:17:02.234746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.762 [2024-11-29 13:17:02.234764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.762 qpair failed and we were unable to recover it. 
00:32:59.762 [2024-11-29 13:17:02.244691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.762 [2024-11-29 13:17:02.244737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.244750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.244757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.244764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.244777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.254770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.254824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.254838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.254845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.254851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.254865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.264769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.264858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.264871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.264878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.264884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.264898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.274754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.274803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.274816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.274823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.274829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.274843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.284803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.284848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.284862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.284869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.284875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.284889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.294840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.294887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.294901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.294908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.294915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.294928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.304859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.304902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.304915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.304922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.304928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.304941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.314896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.314947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.314972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.314981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.314988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.315006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.324919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.324965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.324980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.324992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.324999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.325013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.334857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.334908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.334921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.334928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.334935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.334949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.344952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.344995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.345009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.345016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.345023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.345037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.354991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.355034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.355049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.355056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.355062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.355076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.364987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.365033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.365047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.365053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.365060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.365077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.375086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.375133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.375147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.375154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.375164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.375178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.385071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.385140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.385153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.385164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.385170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.385184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.395089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.395133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.395146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.395153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.395164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.395177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.405116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.405168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.405181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.405188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.405195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.405209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.415186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.415240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.415254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.415261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.415267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.415280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.425042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.425086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.425099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.425106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.425113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.425126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:32:59.763 [2024-11-29 13:17:02.435196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:59.763 [2024-11-29 13:17:02.435240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:59.763 [2024-11-29 13:17:02.435254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:59.763 [2024-11-29 13:17:02.435261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:59.763 [2024-11-29 13:17:02.435267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:32:59.763 [2024-11-29 13:17:02.435280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:59.763 qpair failed and we were unable to recover it. 
00:33:00.026 [2024-11-29 13:17:02.445226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.026 [2024-11-29 13:17:02.445273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.026 [2024-11-29 13:17:02.445286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.026 [2024-11-29 13:17:02.445293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.026 [2024-11-29 13:17:02.445300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.026 [2024-11-29 13:17:02.445313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.026 qpair failed and we were unable to recover it. 
00:33:00.026 [2024-11-29 13:17:02.455304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.026 [2024-11-29 13:17:02.455359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.026 [2024-11-29 13:17:02.455372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.026 [2024-11-29 13:17:02.455383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.026 [2024-11-29 13:17:02.455389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.026 [2024-11-29 13:17:02.455402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.026 qpair failed and we were unable to recover it. 
00:33:00.026 [2024-11-29 13:17:02.465287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.026 [2024-11-29 13:17:02.465329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.027 [2024-11-29 13:17:02.465343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.027 [2024-11-29 13:17:02.465350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.027 [2024-11-29 13:17:02.465357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.027 [2024-11-29 13:17:02.465370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.027 qpair failed and we were unable to recover it. 
00:33:00.027 [2024-11-29 13:17:02.475293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.475339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.475353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.475360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.475366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.475379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.485342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.485392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.485408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.485415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.485421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.485435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.495389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.495452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.495465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.495472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.495479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.495496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.505388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.505435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.505448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.505455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.505462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.505475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.515426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.515470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.515483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.515490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.515497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.515510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.525447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.525492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.525505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.525513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.525519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.525532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.535399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.535451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.535466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.535474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.535480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.535495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.545489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.545538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.545552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.545559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.545565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.545579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.555535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.555582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.555596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.555603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.555609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.555623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.565543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.565590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.565603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.565611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.565617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.565631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.575625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.575677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.575691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.575698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.575704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.575718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.585597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.585642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.585655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.027 [2024-11-29 13:17:02.585666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.027 [2024-11-29 13:17:02.585672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.027 [2024-11-29 13:17:02.585686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.027 qpair failed and we were unable to recover it.
00:33:00.027 [2024-11-29 13:17:02.595627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.027 [2024-11-29 13:17:02.595671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.027 [2024-11-29 13:17:02.595684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.595691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.595697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.595710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.605637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.605684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.605697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.605704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.605711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.605724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.615704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.615754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.615767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.615774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.615781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.615794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.625696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.625761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.625774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.625781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.625788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.625805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.635730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.635823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.635837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.635844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.635850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.635864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.645747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.645792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.645806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.645814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.645821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.645834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.655831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.655881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.655895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.655902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.655909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.655922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.665822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.665873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.665886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.665893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.665900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.665913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.675838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.675893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.675919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.675928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.675935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.675954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.685879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.685930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.685947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.685955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.685962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.685977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.028 [2024-11-29 13:17:02.695941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.028 [2024-11-29 13:17:02.695995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.028 [2024-11-29 13:17:02.696010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.028 [2024-11-29 13:17:02.696017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.028 [2024-11-29 13:17:02.696023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.028 [2024-11-29 13:17:02.696038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.028 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.705923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.705966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.705979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.705986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.705993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.706006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.715937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.715981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.715995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.716006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.716013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.716027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.725997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.726046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.726060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.726066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.726073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.726086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.736008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.736057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.736070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.736077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.736083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.736097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.746030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.746100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.746114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.746121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.746127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.746141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.756025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.756098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.756112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.756119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.756126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.756143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.766096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.766147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.291 [2024-11-29 13:17:02.766165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.291 [2024-11-29 13:17:02.766173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.291 [2024-11-29 13:17:02.766179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.291 [2024-11-29 13:17:02.766193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.291 qpair failed and we were unable to recover it.
00:33:00.291 [2024-11-29 13:17:02.776132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:00.291 [2024-11-29 13:17:02.776183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:00.292 [2024-11-29 13:17:02.776197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:00.292 [2024-11-29 13:17:02.776204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:00.292 [2024-11-29 13:17:02.776210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0
00:33:00.292 [2024-11-29 13:17:02.776224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:33:00.292 qpair failed and we were unable to recover it.
00:33:00.292 [2024-11-29 13:17:02.786127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.786175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.786189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.786196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.786202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.786216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.796164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.796207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.796221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.796228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.796235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.796248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.806188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.806238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.806251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.806259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.806265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.806279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.816205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.816252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.816265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.816273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.816279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.816293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.826242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.826289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.826303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.826310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.826316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.826329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.836227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.836273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.836286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.836293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.836299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.836313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.846301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.846346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.846359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.846370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.846376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.846390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.856336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.856410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.856423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.856430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.856436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.856450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.866217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.866263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.866277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.866284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.866290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.866303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.876378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.876423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.876436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.876443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.876449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.876462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.886393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.886446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.886459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.886466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.886472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.886492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.896446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.896495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.896507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.896514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.292 [2024-11-29 13:17:02.896521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.292 [2024-11-29 13:17:02.896534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.292 qpair failed and we were unable to recover it. 
00:33:00.292 [2024-11-29 13:17:02.906552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.292 [2024-11-29 13:17:02.906597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.292 [2024-11-29 13:17:02.906610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.292 [2024-11-29 13:17:02.906617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.906623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.906636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.916481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.916522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.916535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.916542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.916549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.916562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.926479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.926524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.926538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.926544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.926551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.926564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.936558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.936607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.936621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.936628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.936634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.936648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.946525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.946570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.946583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.946590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.946597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.946610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.956589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.956635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.956648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.956655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.956661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.956674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.293 [2024-11-29 13:17:02.966590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.293 [2024-11-29 13:17:02.966636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.293 [2024-11-29 13:17:02.966649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.293 [2024-11-29 13:17:02.966656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.293 [2024-11-29 13:17:02.966662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.293 [2024-11-29 13:17:02.966676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.293 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:02.976639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:02.976686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:02.976699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:02.976709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:02.976716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:02.976729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:02.986695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:02.986735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:02.986747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:02.986754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:02.986761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:02.986774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:02.996658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:02.996696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:02.996710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:02.996717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:02.996723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:02.996737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.006721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.006767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.006781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.006788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.006794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.006808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.016816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.016864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.016877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.016884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.016891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.016908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.026765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.026811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.026824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.026831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.026837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.026850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.036782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.036823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.036836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.036843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.036849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.036863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.046875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.046924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.046937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.046944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.046950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.046964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.056844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.056890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.056903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.056910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.056916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.056930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.066753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.066800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.066814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.066821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.066828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.066841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.556 qpair failed and we were unable to recover it. 
00:33:00.556 [2024-11-29 13:17:03.076910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.556 [2024-11-29 13:17:03.076949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.556 [2024-11-29 13:17:03.076962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.556 [2024-11-29 13:17:03.076969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.556 [2024-11-29 13:17:03.076976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.556 [2024-11-29 13:17:03.076989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.557 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.427824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.427866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.427880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.427887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.821 [2024-11-29 13:17:03.427893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.821 [2024-11-29 13:17:03.427906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.821 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.437866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.437957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.437970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.437977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.821 [2024-11-29 13:17:03.437984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.821 [2024-11-29 13:17:03.437997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.821 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.447898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.447942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.447955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.447962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.821 [2024-11-29 13:17:03.447969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.821 [2024-11-29 13:17:03.447982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.821 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.457923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.457971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.457985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.457992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.821 [2024-11-29 13:17:03.457998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.821 [2024-11-29 13:17:03.458011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.821 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.467943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.467994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.468007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.468014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.821 [2024-11-29 13:17:03.468020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.821 [2024-11-29 13:17:03.468034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.821 qpair failed and we were unable to recover it. 
00:33:00.821 [2024-11-29 13:17:03.477962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.821 [2024-11-29 13:17:03.478005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.821 [2024-11-29 13:17:03.478021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.821 [2024-11-29 13:17:03.478028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.822 [2024-11-29 13:17:03.478035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.822 [2024-11-29 13:17:03.478049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.822 qpair failed and we were unable to recover it. 
00:33:00.822 [2024-11-29 13:17:03.488007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:00.822 [2024-11-29 13:17:03.488056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:00.822 [2024-11-29 13:17:03.488071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:00.822 [2024-11-29 13:17:03.488078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:00.822 [2024-11-29 13:17:03.488085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:00.822 [2024-11-29 13:17:03.488103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:00.822 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.498043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.498106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.498120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.498131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.498138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.498152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.508064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.508106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.508119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.508126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.508133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.508146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.518080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.518125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.518139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.518146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.518152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.518170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.528108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.528161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.528174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.528181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.528187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.528201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.538148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.538201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.538215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.538222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.538228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.538245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.548177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.548222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.084 [2024-11-29 13:17:03.548235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.084 [2024-11-29 13:17:03.548242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.084 [2024-11-29 13:17:03.548248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.084 [2024-11-29 13:17:03.548261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.084 qpair failed and we were unable to recover it. 
00:33:01.084 [2024-11-29 13:17:03.558185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.084 [2024-11-29 13:17:03.558234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.558248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.558255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.558261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.558275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.568185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.568233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.568246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.568252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.568259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.568272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.578247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.578302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.578315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.578322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.578329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.578342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.588258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.588327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.588341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.588348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.588354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.588368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.598186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.598229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.598242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.598249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.598255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.598268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.608326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.608376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.608389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.608395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.608402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.608415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.618398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.618489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.618502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.618509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.618515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.618528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.628397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.628470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.628483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.628494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.628500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.628513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.638388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.638432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.638445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.638452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.638458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.638471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.648458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.648502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.648517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.648524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.648530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.648544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.658375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.658421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.658436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.658443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.658449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.658464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.668491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.668532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.668546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.668553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.668559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.668576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.678492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.678537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.678550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.085 [2024-11-29 13:17:03.678558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.085 [2024-11-29 13:17:03.678564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.085 [2024-11-29 13:17:03.678577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.085 qpair failed and we were unable to recover it. 
00:33:01.085 [2024-11-29 13:17:03.688507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.085 [2024-11-29 13:17:03.688572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.085 [2024-11-29 13:17:03.688585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.688592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.688598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.688612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.698580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.698643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.698657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.698664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.698670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.698683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.708590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.708642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.708655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.708663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.708669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.708683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.718583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.718631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.718645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.718652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.718658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.718672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.728672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.728716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.728729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.728736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.728743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.728756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.738698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.738750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.738763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.738771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.738777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.738791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.748698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.748753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.748767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.748774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.748780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.748794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.086 [2024-11-29 13:17:03.758745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.086 [2024-11-29 13:17:03.758792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.086 [2024-11-29 13:17:03.758806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.086 [2024-11-29 13:17:03.758818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.086 [2024-11-29 13:17:03.758825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.086 [2024-11-29 13:17:03.758839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.086 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.768759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.768811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.768825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.768832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.768839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.768853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.778767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.778816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.778830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.778837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.778843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.778857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.788776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.788824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.788837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.788844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.788850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.788864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.798845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.798917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.798931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.798938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.798944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.798961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.808849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.808903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.808928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.808937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.808944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.808963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.818909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.818973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.818998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.819007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.819014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.819032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.828932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.829047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.829062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.829070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.829077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.829091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.838946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.838997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.839011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.349 [2024-11-29 13:17:03.839018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.349 [2024-11-29 13:17:03.839024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.349 [2024-11-29 13:17:03.839038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.349 qpair failed and we were unable to recover it. 
00:33:01.349 [2024-11-29 13:17:03.848972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.349 [2024-11-29 13:17:03.849031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.349 [2024-11-29 13:17:03.849045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.849052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.849058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.849072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.859025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.859094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.859108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.859115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.859122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.859135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.868934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.869002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.869017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.869024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.869030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.869045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.879067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.879108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.879122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.879129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.879135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.879149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.889104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.889154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.889171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.889186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.889192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.889206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.899118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.899205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.899219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.899226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.899232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.899246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.909161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.909247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.909260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.909267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.909274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.909287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.919174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.919236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.919249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.919256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.919263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.919276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.929191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.929239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.929253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.929260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.929266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.929283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.939126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.939178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.939192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.939199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.939205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.939219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.949299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.949340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.949353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.949360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.949367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.949380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.959285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.959334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.959347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.959354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.959361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.959374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.969367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.969442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.969455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.969462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.350 [2024-11-29 13:17:03.969468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.350 [2024-11-29 13:17:03.969482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.350 qpair failed and we were unable to recover it. 
00:33:01.350 [2024-11-29 13:17:03.979344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.350 [2024-11-29 13:17:03.979391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.350 [2024-11-29 13:17:03.979404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.350 [2024-11-29 13:17:03.979411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.351 [2024-11-29 13:17:03.979417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.351 [2024-11-29 13:17:03.979431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.351 qpair failed and we were unable to recover it. 
00:33:01.351 [2024-11-29 13:17:03.989367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.351 [2024-11-29 13:17:03.989415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.351 [2024-11-29 13:17:03.989428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.351 [2024-11-29 13:17:03.989435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.351 [2024-11-29 13:17:03.989441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.351 [2024-11-29 13:17:03.989455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.351 qpair failed and we were unable to recover it. 
00:33:01.351 [2024-11-29 13:17:03.999424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.351 [2024-11-29 13:17:03.999496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.351 [2024-11-29 13:17:03.999509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.351 [2024-11-29 13:17:03.999516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.351 [2024-11-29 13:17:03.999522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.351 [2024-11-29 13:17:03.999535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.351 qpair failed and we were unable to recover it. 
00:33:01.351 [2024-11-29 13:17:04.009354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.351 [2024-11-29 13:17:04.009401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.351 [2024-11-29 13:17:04.009414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.351 [2024-11-29 13:17:04.009421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.351 [2024-11-29 13:17:04.009427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.351 [2024-11-29 13:17:04.009441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.351 qpair failed and we were unable to recover it. 
00:33:01.351 [2024-11-29 13:17:04.019471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.351 [2024-11-29 13:17:04.019520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.351 [2024-11-29 13:17:04.019533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.351 [2024-11-29 13:17:04.019544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.351 [2024-11-29 13:17:04.019550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.351 [2024-11-29 13:17:04.019563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.351 qpair failed and we were unable to recover it. 
00:33:01.613 [2024-11-29 13:17:04.029486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.613 [2024-11-29 13:17:04.029529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.613 [2024-11-29 13:17:04.029544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.613 [2024-11-29 13:17:04.029551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.613 [2024-11-29 13:17:04.029557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.613 [2024-11-29 13:17:04.029571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.613 qpair failed and we were unable to recover it. 
00:33:01.613 [2024-11-29 13:17:04.039478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.613 [2024-11-29 13:17:04.039520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.613 [2024-11-29 13:17:04.039533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.613 [2024-11-29 13:17:04.039540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.613 [2024-11-29 13:17:04.039546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.613 [2024-11-29 13:17:04.039560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.613 qpair failed and we were unable to recover it. 
00:33:01.613 [2024-11-29 13:17:04.049530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.613 [2024-11-29 13:17:04.049580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.613 [2024-11-29 13:17:04.049593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.613 [2024-11-29 13:17:04.049600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.613 [2024-11-29 13:17:04.049607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.613 [2024-11-29 13:17:04.049620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.613 qpair failed and we were unable to recover it. 
00:33:01.613 [2024-11-29 13:17:04.059556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.613 [2024-11-29 13:17:04.059598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.613 [2024-11-29 13:17:04.059611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.613 [2024-11-29 13:17:04.059618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.613 [2024-11-29 13:17:04.059624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.613 [2024-11-29 13:17:04.059641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.613 qpair failed and we were unable to recover it. 
00:33:01.613 [2024-11-29 13:17:04.069583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.613 [2024-11-29 13:17:04.069633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.069646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.069653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.069660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.069673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.079557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.079601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.079614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.079621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.079628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.079641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.089604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.089647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.089660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.089668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.089674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.089687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.099674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.099727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.099740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.099747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.099753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.099766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.109693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.109756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.109770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.109777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.109783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.109796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.119719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.119762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.119776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.119783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.119789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.119802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.129766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.129811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.129824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.129831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.129838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.129851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.139793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.139844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.139857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.139864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.139870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.139883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.149790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.149832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.149850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.149860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.149867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.149882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.159708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.159752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.159766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.159773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.159780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.159794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.169863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.169910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.169924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.169931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.169937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.169951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.179902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.179972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.179997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.180006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.180012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaef0c0 00:33:01.614 [2024-11-29 13:17:04.180031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.189922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:01.614 [2024-11-29 13:17:04.190035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:01.614 [2024-11-29 13:17:04.190101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:01.614 [2024-11-29 13:17:04.190126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:01.614 [2024-11-29 13:17:04.190146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00cc000b90 00:33:01.614 [2024-11-29 13:17:04.190226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:01.614 qpair failed and we were unable to recover it. 
00:33:01.614 [2024-11-29 13:17:04.199955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:01.614 [2024-11-29 13:17:04.200046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:01.614 [2024-11-29 13:17:04.200093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:01.615 [2024-11-29 13:17:04.200112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:01.615 [2024-11-29 13:17:04.200128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f00cc000b90
00:33:01.615 [2024-11-29 13:17:04.200177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:33:01.615 qpair failed and we were unable to recover it.
00:33:01.615 [2024-11-29 13:17:04.200331] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:33:01.615 A controller has encountered a failure and is being reset.
00:33:01.615 [2024-11-29 13:17:04.200455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae4e10 (9): Bad file descriptor
00:33:01.615 Controller properly reset.
00:33:01.615 Initializing NVMe Controllers
00:33:01.615 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:01.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:01.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:01.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:01.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:01.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:01.615 Initialization complete. Launching workers.
00:33:01.615 Starting thread on core 1
00:33:01.615 Starting thread on core 2
00:33:01.615 Starting thread on core 3
00:33:01.615 Starting thread on core 0
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:33:01.615
00:33:01.615 real 0m11.287s
00:33:01.615 user 0m22.119s
00:33:01.615 sys 0m3.864s
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:01.615 ************************************
00:33:01.615 END TEST nvmf_target_disconnect_tc2
00:33:01.615 ************************************
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:01.615 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1116719 ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1116719 ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116719'
killing process with pid 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1116719
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:01.876 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:33:02.137 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:02.137 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:02.137 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:02.137 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:02.137 13:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:04.052
00:33:04.052 real 0m21.645s
00:33:04.052 user 0m49.580s
00:33:04.052 sys 0m9.938s
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:33:04.052 ************************************
00:33:04.052 END TEST nvmf_target_disconnect
00:33:04.052 ************************************
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:33:04.052
00:33:04.052 real 6m33.761s
00:33:04.052 user 11m25.600s
00:33:04.052 sys 2m16.089s
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:04.052 13:17:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:04.052 ************************************
00:33:04.052 END TEST nvmf_host
00:33:04.052 ************************************
00:33:04.052 13:17:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:33:04.052 13:17:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:33:04.052 13:17:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:33:04.052 13:17:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:04.052 13:17:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:04.052 13:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:04.313 ************************************
00:33:04.313 START TEST nvmf_target_core_interrupt_mode
00:33:04.313 ************************************
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:33:04.313 * Looking for test storage...
00:33:04.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:04.313 --rc genhtml_branch_coverage=1
00:33:04.313 --rc genhtml_function_coverage=1
00:33:04.313 --rc genhtml_legend=1
00:33:04.313 --rc geninfo_all_blocks=1
00:33:04.313 --rc geninfo_unexecuted_blocks=1
00:33:04.313
00:33:04.313 '
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:04.313 --rc genhtml_branch_coverage=1
00:33:04.313 --rc genhtml_function_coverage=1
00:33:04.313 --rc genhtml_legend=1
00:33:04.313 --rc geninfo_all_blocks=1
00:33:04.313 --rc geninfo_unexecuted_blocks=1
00:33:04.313
00:33:04.313 '
00:33:04.313 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:04.313 --rc genhtml_branch_coverage=1
00:33:04.314 --rc genhtml_function_coverage=1
00:33:04.314 --rc genhtml_legend=1
00:33:04.314 --rc geninfo_all_blocks=1
00:33:04.314 --rc geninfo_unexecuted_blocks=1
00:33:04.314
00:33:04.314 '
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:04.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:04.314 --rc genhtml_branch_coverage=1
00:33:04.314 --rc genhtml_function_coverage=1
00:33:04.314 --rc genhtml_legend=1
00:33:04.314 --rc geninfo_all_blocks=1
00:33:04.314 --rc geninfo_unexecuted_blocks=1
00:33:04.314
00:33:04.314 '
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:04.314 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:04.314
13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.575 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.576 13:17:06 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.576 
13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.576 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:04.576 ************************************ 00:33:04.576 START TEST nvmf_abort 00:33:04.576 ************************************ 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:04.576 * Looking for test storage... 
00:33:04.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:04.576 13:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.576 --rc genhtml_branch_coverage=1 00:33:04.576 --rc genhtml_function_coverage=1 00:33:04.576 --rc genhtml_legend=1 00:33:04.576 --rc geninfo_all_blocks=1 00:33:04.576 --rc geninfo_unexecuted_blocks=1 00:33:04.576 00:33:04.576 ' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.576 --rc genhtml_branch_coverage=1 00:33:04.576 --rc genhtml_function_coverage=1 00:33:04.576 --rc genhtml_legend=1 00:33:04.576 --rc geninfo_all_blocks=1 00:33:04.576 --rc geninfo_unexecuted_blocks=1 00:33:04.576 00:33:04.576 ' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.576 --rc genhtml_branch_coverage=1 00:33:04.576 --rc genhtml_function_coverage=1 00:33:04.576 --rc genhtml_legend=1 00:33:04.576 --rc geninfo_all_blocks=1 00:33:04.576 --rc geninfo_unexecuted_blocks=1 00:33:04.576 00:33:04.576 ' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.576 --rc genhtml_branch_coverage=1 00:33:04.576 --rc genhtml_function_coverage=1 00:33:04.576 --rc genhtml_legend=1 00:33:04.576 --rc geninfo_all_blocks=1 00:33:04.576 --rc geninfo_unexecuted_blocks=1 00:33:04.576 00:33:04.576 ' 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.576 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.839 13:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.839 13:17:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.839 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.118 13:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:13.118 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:13.118 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.118 
13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:13.118 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:13.118 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.118 13:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.118 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:33:13.118 00:33:13.118 --- 10.0.0.2 ping statistics --- 00:33:13.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.118 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:13.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:33:13.119 00:33:13.119 --- 10.0.0.1 ping statistics --- 00:33:13.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.119 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1122232 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1122232 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1122232 ']' 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.119 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.119 [2024-11-29 13:17:14.907916] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.119 [2024-11-29 13:17:14.909064] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:33:13.119 [2024-11-29 13:17:14.909116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.119 [2024-11-29 13:17:15.008538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:13.119 [2024-11-29 13:17:15.060682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.119 [2024-11-29 13:17:15.060735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.119 [2024-11-29 13:17:15.060744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.119 [2024-11-29 13:17:15.060751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.119 [2024-11-29 13:17:15.060758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.119 [2024-11-29 13:17:15.062823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:13.119 [2024-11-29 13:17:15.062985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.119 [2024-11-29 13:17:15.062987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:13.119 [2024-11-29 13:17:15.142111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:13.119 [2024-11-29 13:17:15.143151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:13.119 [2024-11-29 13:17:15.143763] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:13.119 [2024-11-29 13:17:15.143879] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.119 [2024-11-29 13:17:15.771896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.119 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:33:13.380 Malloc0 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.380 Delay0 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.380 [2024-11-29 13:17:15.871831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.380 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:13.380 [2024-11-29 13:17:16.016934] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:15.927 Initializing NVMe Controllers 00:33:15.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:15.927 controller IO queue size 128 less than required 00:33:15.927 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:15.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:15.927 Initialization complete. Launching workers. 
00:33:15.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28608 00:33:15.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28665, failed to submit 66 00:33:15.927 success 28608, unsuccessful 57, failed 0 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:15.927 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.928 rmmod nvme_tcp 00:33:15.928 rmmod nvme_fabrics 00:33:15.928 rmmod nvme_keyring 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.928 13:17:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1122232 ']' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1122232 ']' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1122232' 00:33:15.928 killing process with pid 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1122232 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.928 13:17:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.928 13:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:18.475 00:33:18.475 real 0m13.549s 00:33:18.475 user 0m11.398s 00:33:18.475 sys 0m6.933s 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:18.475 ************************************ 00:33:18.475 END TEST nvmf_abort 00:33:18.475 ************************************ 00:33:18.475 13:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:18.475 ************************************ 00:33:18.475 START TEST nvmf_ns_hotplug_stress 00:33:18.475 ************************************ 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:18.475 * Looking for test storage... 
00:33:18.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.475 13:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.475 13:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:18.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.475 --rc genhtml_branch_coverage=1 00:33:18.475 --rc genhtml_function_coverage=1 00:33:18.475 --rc genhtml_legend=1 00:33:18.475 --rc geninfo_all_blocks=1 00:33:18.475 --rc geninfo_unexecuted_blocks=1 00:33:18.475 00:33:18.475 ' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:18.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.475 --rc genhtml_branch_coverage=1 00:33:18.475 --rc genhtml_function_coverage=1 00:33:18.475 --rc genhtml_legend=1 00:33:18.475 --rc geninfo_all_blocks=1 00:33:18.475 --rc geninfo_unexecuted_blocks=1 00:33:18.475 00:33:18.475 ' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:18.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.475 --rc genhtml_branch_coverage=1 00:33:18.475 --rc genhtml_function_coverage=1 00:33:18.475 --rc genhtml_legend=1 00:33:18.475 --rc geninfo_all_blocks=1 00:33:18.475 --rc geninfo_unexecuted_blocks=1 00:33:18.475 00:33:18.475 ' 00:33:18.475 13:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:18.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.475 --rc genhtml_branch_coverage=1 00:33:18.475 --rc genhtml_function_coverage=1 00:33:18.475 --rc genhtml_legend=1 00:33:18.475 --rc geninfo_all_blocks=1 00:33:18.475 --rc geninfo_unexecuted_blocks=1 00:33:18.475 00:33:18.475 ' 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.475 13:17:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.475 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.476 
13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.476 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.620 
13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.620 13:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:26.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.620 13:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:26.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.620 
13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:26.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:26.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:26.620 
13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.620 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:33:26.621 00:33:26.621 --- 10.0.0.2 ping statistics --- 00:33:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.621 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:26.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:33:26.621 00:33:26.621 --- 10.0.0.1 ping statistics --- 00:33:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.621 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.621 13:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1127151 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1127151 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1127151 ']' 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.621 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:26.621 [2024-11-29 13:17:28.497591] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:26.621 [2024-11-29 13:17:28.498697] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:33:26.621 [2024-11-29 13:17:28.498748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.621 [2024-11-29 13:17:28.599075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:26.621 [2024-11-29 13:17:28.649723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.621 [2024-11-29 13:17:28.649773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.621 [2024-11-29 13:17:28.649782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.621 [2024-11-29 13:17:28.649789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.621 [2024-11-29 13:17:28.649795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:26.621 [2024-11-29 13:17:28.651618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.621 [2024-11-29 13:17:28.651783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.621 [2024-11-29 13:17:28.651784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.621 [2024-11-29 13:17:28.729600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:26.621 [2024-11-29 13:17:28.730416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:26.621 [2024-11-29 13:17:28.730949] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:26.621 [2024-11-29 13:17:28.731116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:26.884 [2024-11-29 13:17:29.516701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.884 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:27.146 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.408 [2024-11-29 13:17:29.897452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.408 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:27.669 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:27.669 Malloc0 00:33:27.669 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:27.929 Delay0 00:33:27.930 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.192 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:28.192 NULL1 00:33:28.453 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:33:28.453 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1127531 00:33:28.453 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:28.453 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:28.453 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:28.714 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.976 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:28.976 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:29.237 true 00:33:29.237 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:29.237 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.237 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:29.498 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:29.498 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:29.758 true 00:33:29.759 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:29.759 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.020 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.281 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:30.281 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:30.281 true 00:33:30.281 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:30.281 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.541 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.803 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:30.803 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:31.064 true 00:33:31.064 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:31.064 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.064 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.324 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:31.324 13:17:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:31.585 true 00:33:31.585 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:31.585 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:31.846 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:31.846 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:31.846 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:32.107 true 00:33:32.107 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:32.107 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.367 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:32.367 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:33:32.367 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:32.627 true 00:33:32.627 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:32.627 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.886 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.145 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:33.145 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:33.145 true 00:33:33.145 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:33.145 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.405 13:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.665 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:33:33.666 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:33.666 true 00:33:33.926 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:33.926 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.927 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.187 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:34.187 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:34.449 true 00:33:34.449 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:34.449 13:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.449 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.710 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:34.710 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:34.971 true 00:33:34.971 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:34.971 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.231 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:35.231 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:35.231 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:35.491 true 00:33:35.492 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:35.492 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.750 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:35.750 13:17:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:35.750 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:36.010 true 00:33:36.010 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:36.010 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.271 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.531 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:36.531 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:36.531 true 00:33:36.531 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:36.531 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:36.790 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:33:37.050 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:37.050 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:37.050 true 00:33:37.050 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:37.050 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.312 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:37.573 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:37.573 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:37.834 true 00:33:37.834 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:37.834 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.834 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.096 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:38.096 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:33:38.356 true 00:33:38.356 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:38.356 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.356 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.617 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:38.617 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:38.878 true 00:33:38.878 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:38.878 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.139 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:39.139 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:39.139 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:39.400 true 00:33:39.400 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:39.400 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:39.662 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:39.662 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:39.662 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:39.922 true 00:33:39.922 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:39.923 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.183 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.445 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:40.445 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:40.445 true 00:33:40.445 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:40.445 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.706 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.968 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:33:40.968 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:40.968 true 00:33:40.968 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:40.968 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.229 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:41.492 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:41.492 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:41.492 true 00:33:41.492 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:41.492 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:41.753 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.014 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:42.014 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:42.275 true 00:33:42.275 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:42.275 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.275 13:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.535 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:42.535 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:42.796 true 00:33:42.796 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:42.796 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.796 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:43.057 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:43.057 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:43.317 true 00:33:43.318 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:43.318 13:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:33:43.578 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:43.578 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:43.579 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:43.839 true 00:33:43.839 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:43.839 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.099 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.360 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:33:44.360 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:44.360 true 00:33:44.360 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:44.360 13:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:44.620 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.882 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:33:44.882 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:44.882 true 00:33:44.882 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:44.882 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.142 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.402 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:33:45.402 13:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:33:45.402 true 00:33:45.663 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:45.663 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.663 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.923 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:45.923 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:46.183 true 00:33:46.183 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:46.183 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:46.183 13:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:46.444 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:46.444 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:46.704 true 00:33:46.704 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:46.704 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:46.964 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:46.964 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:33:46.964 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:33:47.225 true 00:33:47.225 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:47.225 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:47.485 13:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:47.485 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:33:47.485 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:33:47.745 true 00:33:47.745 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:47.745 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:48.005 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:48.265 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:33:48.265 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:33:48.265 true 00:33:48.265 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:48.265 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:48.525 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:48.786 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:33:48.786 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:33:48.786 true 00:33:49.046 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:49.046 13:17:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.046 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:49.305 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:33:49.305 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:33:49.564 true 00:33:49.564 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:49.564 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:49.564 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:49.823 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:33:49.824 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:33:50.084 true 00:33:50.084 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 
00:33:50.084 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.344 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:50.344 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:33:50.344 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:33:50.604 true 00:33:50.604 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:50.604 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.863 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:50.863 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:33:50.863 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:33:51.123 true 00:33:51.123 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1127531 00:33:51.123 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.383 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:51.643 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:33:51.643 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:33:51.643 true 00:33:51.643 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:51.643 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.904 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.164 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:33:52.164 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:33:52.164 true 00:33:52.164 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:52.165 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.424 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.683 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:33:52.683 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:33:52.683 true 00:33:52.942 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:52.942 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.943 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:53.203 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:33:53.203 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:33:53.464 true 00:33:53.464 13:17:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:53.464 13:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.464 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:53.724 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:33:53.724 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:33:53.985 true 00:33:53.985 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:53.985 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.245 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.245 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:33:54.245 13:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:33:54.505 true 
00:33:54.505 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:54.505 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.765 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.765 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:33:54.765 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:33:55.026 true 00:33:55.026 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:55.026 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.286 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:55.547 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:33:55.547 13:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:33:55.547 true 00:33:55.547 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:55.547 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.807 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:56.068 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:33:56.068 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:33:56.068 true 00:33:56.068 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:56.068 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:56.329 13:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:56.590 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:33:56.590 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:33:56.590 true 00:33:56.852 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:56.852 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:56.852 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:57.113 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:33:57.113 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:33:57.373 true 00:33:57.373 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:57.373 13:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:57.373 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:57.633 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:33:57.633 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:33:57.893 true 00:33:57.893 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:57.893 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:57.893 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:58.153 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:33:58.153 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:33:58.414 true 00:33:58.414 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531 00:33:58.414 13:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.673 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:58.673 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:33:58.673 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:33:58.982 Initializing NVMe Controllers
00:33:58.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:58.982 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:33:58.982 Controller IO queue size 128, less than required.
00:33:58.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:58.982 WARNING: Some requested NVMe devices were skipped
00:33:58.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:58.982 Initialization complete. Launching workers.
00:33:58.982 ========================================================
00:33:58.982 Latency(us)
00:33:58.982 Device Information : IOPS MiB/s Average min max
00:33:58.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29902.47 14.60 4281.64 1100.25 45448.99
00:33:58.982 ========================================================
00:33:58.982 Total : 29902.47 14.60 4281.64 1100.25 45448.99
00:33:58.982
00:33:58.982 true
00:33:58.982 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1127531
00:33:58.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1127531) - No such process
00:33:58.982 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1127531
00:33:58.982 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:59.280 13:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:59.591 null0 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:59.591 null1 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:59.591 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:59.852 null2 00:33:59.852 13:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:59.852 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:59.852 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:00.114 null3 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:00.114 null4 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:00.114 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:00.375 null5 00:34:00.375 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:00.375 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:00.375 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:00.636 null6 00:34:00.636 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:00.636 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:00.637 null7 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
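The `sh@14`–`sh@18` trace lines interleaved above all come from the script's `add_remove` helper: each worker pins one namespace ID to one null bdev and attaches/detaches it ten times. A minimal standalone sketch of that control flow, with `scripts/rpc.py` replaced by a stub (the `rpc` function and the `CALLS` counter are illustrative additions, not part of the original script):

```shell
#!/usr/bin/env bash
# Stub standing in for scripts/rpc.py against a live nvmf target (illustrative only).
CALLS=0
rpc() { CALLS=$((CALLS + 1)); }

# Mirrors add_remove() from ns_hotplug_stress.sh: one worker hammers a single
# namespace ID, adding and removing the same null bdev ten times in a row.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

add_remove 1 null0
echo "$CALLS"   # 10 iterations x 2 RPCs: prints 20
```

Each add/remove pair races against I/O in flight on the subsystem, which is the hot-plug stress the test is after.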
00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1133973 1133975 1133978 1133980 1133983 1133987 1133990 1133993 00:34:00.637 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
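The fan-out visible in the `sh@58`–`sh@66` lines (eight `bdev_null_create` calls, eight backgrounded `add_remove` workers, then `wait 1133973 1133975 ...`) follows a standard bash job-control pattern. A runnable sketch under the same stubbed-`rpc` assumption as above (PIDs and the single add/remove body per worker are simplified for illustration):

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py so the fan-out logic runs standalone (illustrative only).
rpc() { :; }

nthreads=8
pids=()

# sh@59-60: create one null bdev per worker (null0..null7), 100 MiB, 4096-byte blocks.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done

# sh@62-64: launch the workers in the background, one namespace ID each,
# collecting $! into pids so sh@66 can wait on all of them.
for ((i = 0; i < nthreads; i++)); do
    ( rpc nvmf_subsystem_add_ns -n $((i + 1)) nqn.2016-06.io.spdk:cnode1 "null$i"
      rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $((i + 1)) ) &
    pids+=($!)
done

wait "${pids[@]}"
echo "${#pids[@]}"   # prints 8
```

Giving each worker its own namespace ID keeps the RPCs from colliding on the same NSID while still stressing the subsystem's namespace table concurrently.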
00:34:00.638 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:00.638 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:00.638 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:00.900 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:01.161 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.162 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:01.423 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.423 13:18:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:01.423 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.685 13:18:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:01.685 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:01.686 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:01.686 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:01.948 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.211 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:02.474 13:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:02.474 13:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.474 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.735 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:02.997 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:03.258 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:03.518 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.518 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:03.779 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:04.040 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:04.040 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:04.301 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.561 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.561 13:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.821 rmmod nvme_tcp 00:34:04.821 rmmod nvme_fabrics 00:34:04.821 rmmod nvme_keyring 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:04.821 13:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1127151 ']' 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1127151 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1127151 ']' 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1127151 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127151 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127151' 00:34:04.821 killing process with pid 1127151 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1127151 00:34:04.821 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1127151 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.080 13:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.996 00:34:06.996 real 0m48.957s 00:34:06.996 user 3m3.051s 00:34:06.996 sys 0m22.336s 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:06.996 ************************************ 00:34:06.996 END TEST nvmf_ns_hotplug_stress 00:34:06.996 
************************************ 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.996 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.258 ************************************ 00:34:07.258 START TEST nvmf_delete_subsystem 00:34:07.258 ************************************ 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:07.258 * Looking for test storage... 
00:34:07.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.258 13:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.258 13:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.258 --rc genhtml_branch_coverage=1 00:34:07.258 --rc genhtml_function_coverage=1 00:34:07.258 --rc genhtml_legend=1 00:34:07.258 --rc geninfo_all_blocks=1 00:34:07.258 --rc geninfo_unexecuted_blocks=1 00:34:07.258 00:34:07.258 ' 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.258 --rc genhtml_branch_coverage=1 00:34:07.258 --rc genhtml_function_coverage=1 00:34:07.258 --rc genhtml_legend=1 00:34:07.258 --rc geninfo_all_blocks=1 00:34:07.258 --rc geninfo_unexecuted_blocks=1 00:34:07.258 00:34:07.258 ' 00:34:07.258 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.258 --rc genhtml_branch_coverage=1 00:34:07.258 --rc genhtml_function_coverage=1 00:34:07.258 --rc genhtml_legend=1 00:34:07.258 --rc geninfo_all_blocks=1 00:34:07.258 --rc geninfo_unexecuted_blocks=1 00:34:07.258 00:34:07.258 ' 00:34:07.258 13:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.258 --rc genhtml_branch_coverage=1 00:34:07.258 --rc genhtml_function_coverage=1 00:34:07.258 --rc genhtml_legend=1 00:34:07.258 --rc geninfo_all_blocks=1 00:34:07.258 --rc geninfo_unexecuted_blocks=1 00:34:07.258 00:34:07.258 ' 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.259 13:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.259 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.521 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.521 
13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.522 13:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.522 13:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:15.666 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:34:15.666 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.666 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.667 13:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:15.667 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:15.667 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.667 13:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:34:15.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:34:15.667 00:34:15.667 --- 10.0.0.2 ping statistics --- 00:34:15.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.667 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:15.667 00:34:15.667 --- 10.0.0.1 ping statistics --- 00:34:15.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.667 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1139506 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1139506 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1139506 ']' 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.667 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.667 [2024-11-29 13:18:17.482335] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.667 [2024-11-29 13:18:17.483460] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:34:15.667 [2024-11-29 13:18:17.483511] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.667 [2024-11-29 13:18:17.584241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:15.667 [2024-11-29 13:18:17.635956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.667 [2024-11-29 13:18:17.636008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.667 [2024-11-29 13:18:17.636017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.667 [2024-11-29 13:18:17.636024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.667 [2024-11-29 13:18:17.636030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.667 [2024-11-29 13:18:17.637798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.667 [2024-11-29 13:18:17.637802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.667 [2024-11-29 13:18:17.716078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:15.667 [2024-11-29 13:18:17.716634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:15.667 [2024-11-29 13:18:17.716944] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.667 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.668 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.668 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 [2024-11-29 13:18:18.346811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 [2024-11-29 13:18:18.379322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 NULL1 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 Delay0 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1139772 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:15.929 13:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:15.929 [2024-11-29 13:18:18.504182] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
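The xtrace records above correspond to a fixed RPC sequence from delete_subsystem.sh: create a TCP transport, a subsystem with a listener, a null bdev wrapped in a delay bdev, attach it as a namespace, then delete the subsystem while perf I/O is in flight. A minimal dry-run sketch of that sequence is below; the NQN, serial, addresses, and bdev parameters are taken from the log, while the `scripts/rpc.py` client path is an assumption (it is the conventional SPDK RPC client location, not shown in this log). Commands are echoed rather than executed so the sketch runs without a live nvmf_tgt.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence exercised by delete_subsystem.sh.
# Values (NQN, serial, IP, bdev sizes, delay latencies) come from the log above.
# 'scripts/rpc.py' is the assumed SPDK RPC client path; drop the leading 'echo'
# to issue the calls against a running nvmf_tgt.
RPC="echo scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
# Delay bdev adds 1s average read/write latency so I/O is still queued
# when the subsystem is deleted out from under the initiator.
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf runs in the background at this point; the test then issues:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The aborted in-flight requests are what produce the repeated "completed with error (sct=0, sc=8)" records that follow in the log.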
00:34:17.845 13:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:17.845 13:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.845 13:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:34:18.106 Read completed with error (sct=0, sc=8) 00:34:18.106 Read completed with error (sct=0, sc=8) 00:34:18.106 Read completed with error (sct=0, sc=8) 00:34:18.106 Write completed with error (sct=0, sc=8) 00:34:18.106 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" records and "starting I/O failed: -6" markers elided ...]
00:34:18.106 [2024-11-29 13:18:20.638628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20172c0 is same with the state(6) to be set
00:34:18.107 [2024-11-29 13:18:20.643532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d6400d490 is same with the state(6) to be set
00:34:19.052 [2024-11-29 13:18:21.602755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20189b0 is same with the state(6) to be set
00:34:19.052 [2024-11-29 13:18:21.642018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20174a0 is same with the state(6) to be set
00:34:19.052 [2024-11-29 13:18:21.642392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017860 is same with the state(6) to be set
00:34:19.052 [2024-11-29 13:18:21.644899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d6400d020 is same with the state(6) to be set
00:34:19.052 [2024-11-29 13:18:21.645007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d6400d7c0 is same with the state(6) to be set
00:34:19.052 Initializing NVMe Controllers 00:34:19.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:19.052 Controller IO queue size 128, less than required. 00:34:19.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:19.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:19.052 Initialization complete. Launching workers. 00:34:19.052 ======================================================== 00:34:19.052 Latency(us) 00:34:19.052 Device Information : IOPS MiB/s Average min max 00:34:19.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.21 0.08 919702.30 381.87 1008077.29 00:34:19.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.21 0.08 921198.33 304.72 1012313.85 00:34:19.052 ======================================================== 00:34:19.052 Total : 318.42 0.16 920450.32 304.72 1012313.85 00:34:19.052
00:34:19.052 [2024-11-29 13:18:21.645630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20189b0 (9): Bad file descriptor 00:34:19.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:19.052 13:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.052 13:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:19.052 13:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1139772 00:34:19.052 13:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1139772 00:34:19.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1139772) - No such process 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1139772 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1139772 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1139772 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.626 13:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:19.626 [2024-11-29 13:18:22.179147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:19.626 13:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1140441 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:19.626 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:19.626 [2024-11-29 13:18:22.280943] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:34:20.198 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:20.198 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:20.198 13:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:20.770 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:20.770 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:20.770 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:21.343 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:21.343 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:21.343 13:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:21.603 13:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:21.603 13:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:21.603 13:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:22.174 13:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:22.174 13:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:22.174 13:18:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:22.744 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:22.744 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:22.744 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:22.744 Initializing NVMe Controllers 00:34:22.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:22.744 Controller IO queue size 128, less than required. 00:34:22.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:22.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:22.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:22.744 Initialization complete. Launching workers. 
00:34:22.744 ======================================================== 00:34:22.744 Latency(us) 00:34:22.744 Device Information : IOPS MiB/s Average min max 00:34:22.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002312.39 1000250.51 1006099.47 00:34:22.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004068.02 1000366.48 1011077.50 00:34:22.744 ======================================================== 00:34:22.744 Total : 256.00 0.12 1003190.21 1000250.51 1011077.50 00:34:22.744 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1140441 00:34:23.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1140441) - No such process 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1140441 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.314 rmmod nvme_tcp 00:34:23.314 rmmod nvme_fabrics 00:34:23.314 rmmod nvme_keyring 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1139506 ']' 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1139506 ']' 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.314 13:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1139506' 00:34:23.314 killing process with pid 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1139506 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.314 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.314 13:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:25.858 00:34:25.858 real 0m18.340s 00:34:25.858 user 0m26.668s 00:34:25.858 sys 0m7.272s 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:25.858 ************************************ 00:34:25.858 END TEST nvmf_delete_subsystem 00:34:25.858 ************************************ 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:25.858 ************************************ 00:34:25.858 START TEST nvmf_host_management 00:34:25.858 ************************************ 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:25.858 * Looking for test storage... 
00:34:25.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.858 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.859 13:18:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:25.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.859 --rc genhtml_branch_coverage=1 00:34:25.859 --rc genhtml_function_coverage=1 00:34:25.859 --rc genhtml_legend=1 00:34:25.859 --rc geninfo_all_blocks=1 00:34:25.859 --rc geninfo_unexecuted_blocks=1 00:34:25.859 00:34:25.859 ' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:25.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.859 --rc genhtml_branch_coverage=1 00:34:25.859 --rc genhtml_function_coverage=1 00:34:25.859 --rc genhtml_legend=1 00:34:25.859 --rc geninfo_all_blocks=1 00:34:25.859 --rc geninfo_unexecuted_blocks=1 00:34:25.859 00:34:25.859 ' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:25.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.859 --rc genhtml_branch_coverage=1 00:34:25.859 --rc genhtml_function_coverage=1 00:34:25.859 --rc genhtml_legend=1 00:34:25.859 --rc geninfo_all_blocks=1 00:34:25.859 --rc geninfo_unexecuted_blocks=1 00:34:25.859 00:34:25.859 ' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:25.859 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.859 --rc genhtml_branch_coverage=1 00:34:25.859 --rc genhtml_function_coverage=1 00:34:25.859 --rc genhtml_legend=1 00:34:25.859 --rc geninfo_all_blocks=1 00:34:25.859 --rc geninfo_unexecuted_blocks=1 00:34:25.859 00:34:25.859 ' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.859 13:18:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.859 
13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.859 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.860 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.004 
13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.004 13:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:34.004 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.004 13:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:34.004 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.004 13:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:34.004 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:34.004 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:34.004 13:18:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.004 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
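The device-discovery loop traced above (nvmf/common.sh@410-429) maps each NIC's PCI address to its kernel interface name by globbing sysfs. A minimal standalone sketch of that step, using a throwaway fake sysfs tree so it runs unprivileged — on a real host the root would be /sys/bus/pci/devices and the interface names come from the ice driver, as in the log:

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery above: for each NIC PCI address, glob
# the sysfs "net" directory of that device to find its interface name(s).
# A temporary fake tree stands in for /sys/bus/pci/devices so this runs
# without hardware or root.
set -euo pipefail

sysfs_root=$(mktemp -d)                       # stand-in for /sys/bus/pci/devices
mkdir -p "$sysfs_root/0000:4b:00.0/net/cvl_0_0" \
         "$sysfs_root/0000:4b:00.1/net/cvl_0_1"

pci_devs=("0000:4b:00.0" "0000:4b:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("$sysfs_root/$pci/net/"*)   # one entry per bound interface
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs_root"
```

The `##*/` expansion is the same trick common.sh@427 uses to turn the sysfs path into a bare interface name.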
00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:34:34.005 00:34:34.005 --- 10.0.0.2 ping statistics --- 00:34:34.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.005 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:34:34.005 00:34:34.005 --- 10.0.0.1 ping statistics --- 00:34:34.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.005 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
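The nvmf_tcp_init sequence logged above follows a fixed recipe: flush both interfaces, move the target NIC into its own network namespace, address both ends of the link, open TCP port 4420, and ping to verify the path. A dry-run sketch of that sequence — the `run` wrapper only records and prints each command, since the real ones need root and the physical cvl_0_0/cvl_0_1 interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps above. Replace the body of
# run() with `"$@"` to execute for real (requires root and the NICs).
set -euo pipefail

target_if=cvl_0_0
initiator_if=cvl_0_1
target_ip=10.0.0.2
initiator_ip=10.0.0.1
ns=cvl_0_0_ns_spdk

declare -a cmds=()
run() { cmds+=("$*"); printf '+ %s\n' "$*"; }

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"     # target NIC now lives in $ns
run ip addr add "$initiator_ip/24" dev "$initiator_if"
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up    # loopback needed inside the ns
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$target_ip"                   # verifies the path end to end
```

Putting only the target interface in the namespace is what lets a single host exercise both sides of the NVMe/TCP connection over real hardware.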
00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1145293 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1145293 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1145293 ']' 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.005 13:18:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.005 [2024-11-29 13:18:35.721071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:34.005 [2024-11-29 13:18:35.722052] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:34:34.005 [2024-11-29 13:18:35.722088] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.005 [2024-11-29 13:18:35.815965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.005 [2024-11-29 13:18:35.853014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.005 [2024-11-29 13:18:35.853046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.005 [2024-11-29 13:18:35.853055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.005 [2024-11-29 13:18:35.853062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.005 [2024-11-29 13:18:35.853068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:34.005 [2024-11-29 13:18:35.854784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.005 [2024-11-29 13:18:35.854935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:34.005 [2024-11-29 13:18:35.855068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.005 [2024-11-29 13:18:35.855069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:34.005 [2024-11-29 13:18:35.911676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:34.005 [2024-11-29 13:18:35.912839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:34.005 [2024-11-29 13:18:35.913000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:34.005 [2024-11-29 13:18:35.913627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:34.005 [2024-11-29 13:18:35.913663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.005 [2024-11-29 13:18:36.559839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.005 13:18:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:34.005 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.006 Malloc0 00:34:34.006 [2024-11-29 13:18:36.660180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.006 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1145491 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1145491 /var/tmp/bdevperf.sock 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1145491 ']' 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:34.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:34.267 { 00:34:34.267 "params": { 00:34:34.267 "name": "Nvme$subsystem", 00:34:34.267 "trtype": "$TEST_TRANSPORT", 00:34:34.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:34.267 "adrfam": "ipv4", 00:34:34.267 "trsvcid": "$NVMF_PORT", 00:34:34.267 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:34.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:34.267 "hdgst": ${hdgst:-false}, 00:34:34.267 "ddgst": ${ddgst:-false} 00:34:34.267 }, 00:34:34.267 "method": "bdev_nvme_attach_controller" 00:34:34.267 } 00:34:34.267 EOF 00:34:34.267 )") 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:34.267 13:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:34.267 "params": { 00:34:34.267 "name": "Nvme0", 00:34:34.267 "trtype": "tcp", 00:34:34.267 "traddr": "10.0.0.2", 00:34:34.267 "adrfam": "ipv4", 00:34:34.267 "trsvcid": "4420", 00:34:34.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:34.267 "hdgst": false, 00:34:34.267 "ddgst": false 00:34:34.267 }, 00:34:34.267 "method": "bdev_nvme_attach_controller" 00:34:34.267 }' 00:34:34.267 [2024-11-29 13:18:36.770377] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:34:34.267 [2024-11-29 13:18:36.770449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145491 ] 00:34:34.267 [2024-11-29 13:18:36.864665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.267 [2024-11-29 13:18:36.918365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.838 Running I/O for 10 seconds... 
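The bdevperf config fed in via `--json /dev/fd/63` above is produced by gen_nvmf_target_json: one here-doc fragment per subsystem with shell variables expanded, comma-joined, then passed through `jq .` (common.sh@582-586). A sketch of that assembly, using the values visible in the log; jq validation is skipped here if jq is not installed:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json assembly traced above: expand one
# JSON fragment per subsystem from a here-doc, then join with commas.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

joined=$(IFS=,; printf '%s' "${config[*]}")   # comma-join; one object here
if command -v jq >/dev/null 2>&1; then
    printf '%s\n' "$joined" | jq . >/dev/null # the real helper validates via jq
fi
printf '%s\n' "$joined"
```

Because the here-doc delimiter is unquoted, `$subsystem` and friends expand at generation time, which is how the literal `Nvme0` / `10.0.0.2` config in the log was produced.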
00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:35.101 13:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.101 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:35.101 
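The waitforio helper running above (host_management.sh@52-64) polls bdevperf's `bdev_get_iostat` RPC until the bdev has completed at least 100 reads, giving up after 10 attempts. A simplified sketch with `rpc_cmd` stubbed to return the canned iostat value seen in the log (579 reads), so it runs without a live /var/tmp/bdevperf.sock; the real helper also takes the RPC socket path, extracts the count with jq, and is paced by the RPC round-trips — here sed stands in for jq:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop above. rpc_cmd is a stub returning
# the iostat JSON observed in the log; the real one shells out to SPDK's
# rpc.py against /var/tmp/bdevperf.sock.
set -euo pipefail

rpc_cmd() {    # stub for: rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b <bdev>
    echo '{"bdevs": [{"name": "Nvme0n1", "num_read_ops": 579}]}'
}

waitforio() {
    local bdev=$1 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd bdev_get_iostat -b "$bdev" |
                sed -n 's/.*"num_read_ops": \([0-9]*\).*/\1/p')
        if [ "$count" -ge 100 ]; then        # enough I/O observed: success
            ret=0
            break
        fi
    done
    return $ret
}

waitforio Nvme0n1 && echo "io flowing"
```

Once the threshold is crossed the test knows real I/O is flowing, and only then proceeds to the destructive step logged next (removing the host from the subsystem mid-I/O).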
[2024-11-29 13:18:37.669727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235af20 is same with the state(6) to be set 00:34:35.101 [... same tcp.c:1773 "recv state of tqpair=0x235af20" message repeated through 13:18:37.670209; duplicate lines elided ...] 00:34:35.102 [2024-11-29 13:18:37.670305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 
13:18:37.670520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:35.102 [2024-11-29 13:18:37.670806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.102 [2024-11-29 13:18:37.670840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.102 [2024-11-29 13:18:37.670847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670897] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.670990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.670998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 
13:18:37.671195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.103 [2024-11-29 13:18:37.671429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.103 [2024-11-29 13:18:37.671438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118eee0 is same with the state(6) to be set 00:34:35.103 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.103 [2024-11-29 13:18:37.672699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:35.103 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:35.103 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.103 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 
-- # set +x
00:34:35.103 task offset: 81920 on job bdev=Nvme0n1 fails
00:34:35.103
00:34:35.103 Latency(us)
00:34:35.103 [2024-11-29T12:18:37.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:35.103 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:35.103 Job: Nvme0n1 ended in about 0.46 seconds with error
00:34:35.103 Verification LBA range: start 0x0 length 0x400
00:34:35.103 Nvme0n1 : 0.46 1387.48 86.72 138.75 0.00 40740.36 4259.84 38229.33
00:34:35.103 [2024-11-29T12:18:37.783Z] ===================================================================================================================
00:34:35.103 [2024-11-29T12:18:37.784Z] Total : 1387.48 86.72 138.75 0.00 40740.36 4259.84 38229.33
00:34:35.104 [2024-11-29 13:18:37.674725] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:34:35.104 [2024-11-29 13:18:37.674750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76010 (9): Bad file descriptor
00:34:35.104 [2024-11-29 13:18:37.676027] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:34:35.104 [2024-11-29 13:18:37.676100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:34:35.104 [2024-11-29 13:18:37.676131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:35.104 [2024-11-29 13:18:37.676145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:34:35.104 [2024-11-29 13:18:37.676154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:34:35.104 [2024-11-29 13:18:37.676179]
nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.104 [2024-11-29 13:18:37.676186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf76010 00:34:35.104 [2024-11-29 13:18:37.676208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76010 (9): Bad file descriptor 00:34:35.104 [2024-11-29 13:18:37.676222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:35.104 [2024-11-29 13:18:37.676229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:35.104 [2024-11-29 13:18:37.676238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:35.104 [2024-11-29 13:18:37.676246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:35.104 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.104 13:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1145491 00:34:36.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1145491) - No such process 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.044 { 00:34:36.044 "params": { 00:34:36.044 "name": "Nvme$subsystem", 00:34:36.044 "trtype": "$TEST_TRANSPORT", 00:34:36.044 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:34:36.044 "adrfam": "ipv4", 00:34:36.044 "trsvcid": "$NVMF_PORT", 00:34:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.044 "hdgst": ${hdgst:-false}, 00:34:36.044 "ddgst": ${ddgst:-false} 00:34:36.044 }, 00:34:36.044 "method": "bdev_nvme_attach_controller" 00:34:36.044 } 00:34:36.044 EOF 00:34:36.044 )") 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:36.044 13:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:36.044 "params": { 00:34:36.044 "name": "Nvme0", 00:34:36.044 "trtype": "tcp", 00:34:36.044 "traddr": "10.0.0.2", 00:34:36.044 "adrfam": "ipv4", 00:34:36.044 "trsvcid": "4420", 00:34:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.044 "hdgst": false, 00:34:36.044 "ddgst": false 00:34:36.044 }, 00:34:36.044 "method": "bdev_nvme_attach_controller" 00:34:36.044 }' 00:34:36.303 [2024-11-29 13:18:38.745237] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:34:36.303 [2024-11-29 13:18:38.745292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145845 ] 00:34:36.303 [2024-11-29 13:18:38.833233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.303 [2024-11-29 13:18:38.868602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.562 Running I/O for 1 seconds... 
00:34:37.505 1678.00 IOPS, 104.88 MiB/s
00:34:37.505 Latency(us)
00:34:37.505 [2024-11-29T12:18:40.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:37.505 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:37.505 Verification LBA range: start 0x0 length 0x400
00:34:37.505 Nvme0n1 : 1.01 1728.44 108.03 0.00 0.00 36295.42 1590.61 35826.35
00:34:37.505 [2024-11-29T12:18:40.185Z] ===================================================================================================================
00:34:37.505 [2024-11-29T12:18:40.185Z] Total : 1728.44 108.03 0.00 0.00 36295.42 1590.61 35826.35
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:34:37.766
13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.766 rmmod nvme_tcp 00:34:37.766 rmmod nvme_fabrics 00:34:37.766 rmmod nvme_keyring 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1145293 ']' 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1145293 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1145293 ']' 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1145293 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145293 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:37.766 13:18:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145293' 00:34:37.766 killing process with pid 1145293 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1145293 00:34:37.766 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1145293 00:34:37.766 [2024-11-29 13:18:40.429442] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.027 13:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:39.940 00:34:39.940 real 0m14.405s 00:34:39.940 user 0m18.958s 00:34:39.940 sys 0m7.349s 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:39.940 ************************************ 00:34:39.940 END TEST nvmf_host_management 00:34:39.940 ************************************ 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:39.940 ************************************ 00:34:39.940 START TEST nvmf_lvol 00:34:39.940 ************************************ 00:34:39.940 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:40.201 * Looking for test storage... 
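The `killprocess` sequence traced earlier (the `'[' -z ... ']'` guard, `kill -0`, `uname`, and `ps --no-headers -o comm=` steps) boils down to: refuse an empty pid, verify the process still exists, refuse to kill a bare `sudo` wrapper, then kill and reap. A hedged re-creation of that flow (`killprocess_sketch` is a made-up name; the real helper lives in `common/autotest_common.sh`):

```shell
# Sketch of the traced killprocess flow, reconstructed from the xtrace
# above; not the actual SPDK helper.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1            # '[' -z "$pid" ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
    if [ "$(uname)" = Linux ]; then
        # The trace inspects the command name so a bare sudo wrapper
        # is never the kill target.
        local process_name
        process_name=$(ps --no-headers -o comm= -p "$pid")
        if [ "$process_name" = sudo ]; then
            return 1
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true      # reap so the pid cannot be reused
}
```

The `wait` at the end mirrors the `# wait <pid>` step in the trace: it reaps the child so a later `kill -9` against the same pid (as seen with the stale pid 1145491 above) harmlessly reports "No such process".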
00:34:40.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.201 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:40.201 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
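The `cmp_versions` walk traced above splits both version strings on `.`, `-` and `:` (`IFS=.-:`), then compares numerically component by component, treating a missing component as 0 and succeeding only on strict less-than. A minimal stand-alone sketch of that logic (`version_lt` is a hypothetical name; the real implementation is `cmp_versions` in `scripts/common.sh`):

```shell
# Sketch of the traced version comparison: returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.-:                 # same separators the trace sets
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk up to the longer component list, as in the (( v < ... )) loop.
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                      # equal is not less-than
}
```

This is what lets the harness decide, for example, that lcov 1.15 predates 2.x before choosing which coverage flags to export.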
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.202 --rc genhtml_branch_coverage=1 00:34:40.202 --rc genhtml_function_coverage=1 00:34:40.202 --rc genhtml_legend=1 00:34:40.202 --rc geninfo_all_blocks=1 00:34:40.202 --rc geninfo_unexecuted_blocks=1 00:34:40.202 00:34:40.202 ' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.202 --rc genhtml_branch_coverage=1 00:34:40.202 --rc genhtml_function_coverage=1 00:34:40.202 --rc genhtml_legend=1 00:34:40.202 --rc geninfo_all_blocks=1 00:34:40.202 --rc geninfo_unexecuted_blocks=1 00:34:40.202 00:34:40.202 ' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.202 --rc genhtml_branch_coverage=1 00:34:40.202 --rc genhtml_function_coverage=1 00:34:40.202 --rc genhtml_legend=1 00:34:40.202 --rc geninfo_all_blocks=1 00:34:40.202 --rc geninfo_unexecuted_blocks=1 00:34:40.202 00:34:40.202 ' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.202 --rc genhtml_branch_coverage=1 00:34:40.202 --rc genhtml_function_coverage=1 00:34:40.202 --rc genhtml_legend=1 00:34:40.202 --rc geninfo_all_blocks=1 00:34:40.202 --rc geninfo_unexecuted_blocks=1 00:34:40.202 00:34:40.202 ' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:40.202 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:40.203 
13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:40.203 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.365 13:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.365 13:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:48.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:48.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.365 13:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.365 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:48.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.366 13:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:48.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.366 13:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:34:48.366 00:34:48.366 --- 10.0.0.2 ping statistics --- 00:34:48.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.366 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:34:48.366 00:34:48.366 --- 10.0.0.1 ping statistics --- 00:34:48.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.366 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1150394 
00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1150394 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1150394 ']' 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.366 13:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:48.366 [2024-11-29 13:18:50.177745] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.366 [2024-11-29 13:18:50.178897] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:34:48.366 [2024-11-29 13:18:50.178951] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.366 [2024-11-29 13:18:50.281634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:48.366 [2024-11-29 13:18:50.334705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.367 [2024-11-29 13:18:50.334761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.367 [2024-11-29 13:18:50.334770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.367 [2024-11-29 13:18:50.334777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.367 [2024-11-29 13:18:50.334783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.367 [2024-11-29 13:18:50.336893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.367 [2024-11-29 13:18:50.337052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.367 [2024-11-29 13:18:50.337052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.367 [2024-11-29 13:18:50.416250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:48.367 [2024-11-29 13:18:50.417184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:48.367 [2024-11-29 13:18:50.417549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:48.367 [2024-11-29 13:18:50.417730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:48.367 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.367 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:48.367 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:48.367 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:48.367 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:48.626 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.626 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:48.626 [2024-11-29 13:18:51.217928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.626 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:48.884 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:48.884 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:49.143 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:49.143 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:49.402 13:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:49.662 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b5da68b2-5ca7-4a85-abb8-4c7891d900d1 00:34:49.662 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5da68b2-5ca7-4a85-abb8-4c7891d900d1 lvol 20 00:34:49.662 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3cba446a-25b9-40d3-880e-ef00590a3ba1 00:34:49.662 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:49.922 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3cba446a-25b9-40d3-880e-ef00590a3ba1 00:34:50.182 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.182 [2024-11-29 13:18:52.817780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.182 13:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:50.442 
13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1150886 00:34:50.443 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:50.443 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:51.387 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3cba446a-25b9-40d3-880e-ef00590a3ba1 MY_SNAPSHOT 00:34:51.647 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f72e1249-34eb-4db0-8c30-6d067e010ce3 00:34:51.647 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3cba446a-25b9-40d3-880e-ef00590a3ba1 30 00:34:51.906 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f72e1249-34eb-4db0-8c30-6d067e010ce3 MY_CLONE 00:34:52.166 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4a4fb4dc-d0a2-4958-847f-a274c7137211 00:34:52.166 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4a4fb4dc-d0a2-4958-847f-a274c7137211 00:34:52.737 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1150886 00:35:00.961 Initializing NVMe Controllers 00:35:00.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:00.961 
Controller IO queue size 128, less than required. 00:35:00.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:00.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:00.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:00.961 Initialization complete. Launching workers. 00:35:00.961 ======================================================== 00:35:00.961 Latency(us) 00:35:00.961 Device Information : IOPS MiB/s Average min max 00:35:00.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15391.40 60.12 8317.37 673.16 67784.89 00:35:00.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15251.40 59.58 8393.49 4156.72 65177.08 00:35:00.961 ======================================================== 00:35:00.961 Total : 30642.80 119.70 8355.26 673.16 67784.89 00:35:00.961 00:35:00.961 13:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.221 13:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cba446a-25b9-40d3-880e-ef00590a3ba1 00:35:01.221 13:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5da68b2-5ca7-4a85-abb8-4c7891d900d1 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.481 rmmod nvme_tcp 00:35:01.481 rmmod nvme_fabrics 00:35:01.481 rmmod nvme_keyring 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1150394 ']' 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1150394 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1150394 ']' 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1150394 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.481 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1150394 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150394' 00:35:01.742 killing process with pid 1150394 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1150394 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1150394 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.742 13:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.742 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:04.287 00:35:04.287 real 0m23.772s 00:35:04.287 user 0m56.244s 00:35:04.287 sys 0m10.622s 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:04.287 ************************************ 00:35:04.287 END TEST nvmf_lvol 00:35:04.287 ************************************ 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:04.287 ************************************ 00:35:04.287 START TEST nvmf_lvs_grow 00:35:04.287 ************************************ 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:04.287 * Looking for test storage... 
00:35:04.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.287 13:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.287 13:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:04.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.287 --rc genhtml_branch_coverage=1 00:35:04.287 --rc genhtml_function_coverage=1 00:35:04.287 --rc genhtml_legend=1 00:35:04.287 --rc geninfo_all_blocks=1 00:35:04.287 --rc geninfo_unexecuted_blocks=1 00:35:04.287 00:35:04.287 ' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:04.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.287 --rc genhtml_branch_coverage=1 00:35:04.287 --rc genhtml_function_coverage=1 00:35:04.287 --rc genhtml_legend=1 00:35:04.287 --rc geninfo_all_blocks=1 00:35:04.287 --rc geninfo_unexecuted_blocks=1 00:35:04.287 00:35:04.287 ' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:04.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.287 --rc genhtml_branch_coverage=1 00:35:04.287 --rc genhtml_function_coverage=1 00:35:04.287 --rc genhtml_legend=1 00:35:04.287 --rc geninfo_all_blocks=1 00:35:04.287 --rc geninfo_unexecuted_blocks=1 00:35:04.287 00:35:04.287 ' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:04.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.287 --rc genhtml_branch_coverage=1 00:35:04.287 --rc genhtml_function_coverage=1 00:35:04.287 --rc genhtml_legend=1 00:35:04.287 --rc geninfo_all_blocks=1 00:35:04.287 --rc 
geninfo_unexecuted_blocks=1 00:35:04.287 00:35:04.287 ' 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:04.287 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:04.288 13:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.288 13:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.288 13:19:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:04.288 13:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.421 
13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.421 13:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.421 13:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:12.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:12.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.421 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:12.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.422 13:19:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:12.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.422 
13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:35:12.422 00:35:12.422 --- 10.0.0.2 ping statistics --- 00:35:12.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.422 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:12.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:35:12.422 00:35:12.422 --- 10.0.0.1 ping statistics --- 00:35:12.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.422 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:12.422 13:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.422 13:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1157226 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1157226 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1157226 ']' 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.422 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.423 [2024-11-29 13:19:14.067094] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:12.423 [2024-11-29 13:19:14.068186] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:35:12.423 [2024-11-29 13:19:14.068231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.423 [2024-11-29 13:19:14.168572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.423 [2024-11-29 13:19:14.219395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.423 [2024-11-29 13:19:14.219448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.423 [2024-11-29 13:19:14.219457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.423 [2024-11-29 13:19:14.219464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.423 [2024-11-29 13:19:14.219470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:12.423 [2024-11-29 13:19:14.220257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.423 [2024-11-29 13:19:14.296833] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:12.423 [2024-11-29 13:19:14.297102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.423 13:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:12.423 [2024-11-29 13:19:15.057097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.423 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:12.423 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:12.423 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.423 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:12.681 ************************************ 00:35:12.681 START TEST lvs_grow_clean 00:35:12.681 ************************************ 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:35:12.681 13:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:12.681 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:12.939 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:12.940 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:12.940 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:13.199 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:13.200 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:13.200 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 lvol 150 00:35:13.200 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 00:35:13.200 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:13.200 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:13.461 [2024-11-29 13:19:16.012793] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:13.461 [2024-11-29 13:19:16.012963] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:13.461 true 00:35:13.461 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:13.461 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:13.721 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:13.721 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:13.721 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 00:35:13.983 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.245 [2024-11-29 13:19:16.705362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1157791 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1157791 /var/tmp/bdevperf.sock 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1157791 ']' 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:14.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.245 13:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:14.506 [2024-11-29 13:19:16.943961] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:35:14.506 [2024-11-29 13:19:16.944048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157791 ] 00:35:14.506 [2024-11-29 13:19:17.040650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.506 [2024-11-29 13:19:17.077419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.077 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.077 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:35:15.077 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:15.337 Nvme0n1 00:35:15.337 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:15.598 [ 00:35:15.598 { 00:35:15.598 "name": "Nvme0n1", 00:35:15.598 "aliases": [ 00:35:15.598 "ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934" 00:35:15.598 ], 00:35:15.598 "product_name": "NVMe disk", 00:35:15.598 
"block_size": 4096, 00:35:15.598 "num_blocks": 38912, 00:35:15.598 "uuid": "ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934", 00:35:15.598 "numa_id": 0, 00:35:15.598 "assigned_rate_limits": { 00:35:15.598 "rw_ios_per_sec": 0, 00:35:15.598 "rw_mbytes_per_sec": 0, 00:35:15.598 "r_mbytes_per_sec": 0, 00:35:15.598 "w_mbytes_per_sec": 0 00:35:15.598 }, 00:35:15.598 "claimed": false, 00:35:15.598 "zoned": false, 00:35:15.598 "supported_io_types": { 00:35:15.598 "read": true, 00:35:15.598 "write": true, 00:35:15.598 "unmap": true, 00:35:15.598 "flush": true, 00:35:15.598 "reset": true, 00:35:15.598 "nvme_admin": true, 00:35:15.598 "nvme_io": true, 00:35:15.598 "nvme_io_md": false, 00:35:15.598 "write_zeroes": true, 00:35:15.598 "zcopy": false, 00:35:15.598 "get_zone_info": false, 00:35:15.598 "zone_management": false, 00:35:15.598 "zone_append": false, 00:35:15.598 "compare": true, 00:35:15.598 "compare_and_write": true, 00:35:15.598 "abort": true, 00:35:15.598 "seek_hole": false, 00:35:15.598 "seek_data": false, 00:35:15.598 "copy": true, 00:35:15.598 "nvme_iov_md": false 00:35:15.598 }, 00:35:15.598 "memory_domains": [ 00:35:15.598 { 00:35:15.598 "dma_device_id": "system", 00:35:15.598 "dma_device_type": 1 00:35:15.598 } 00:35:15.598 ], 00:35:15.598 "driver_specific": { 00:35:15.598 "nvme": [ 00:35:15.598 { 00:35:15.598 "trid": { 00:35:15.598 "trtype": "TCP", 00:35:15.598 "adrfam": "IPv4", 00:35:15.598 "traddr": "10.0.0.2", 00:35:15.598 "trsvcid": "4420", 00:35:15.598 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:15.598 }, 00:35:15.598 "ctrlr_data": { 00:35:15.598 "cntlid": 1, 00:35:15.598 "vendor_id": "0x8086", 00:35:15.598 "model_number": "SPDK bdev Controller", 00:35:15.598 "serial_number": "SPDK0", 00:35:15.598 "firmware_revision": "25.01", 00:35:15.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.598 "oacs": { 00:35:15.598 "security": 0, 00:35:15.598 "format": 0, 00:35:15.598 "firmware": 0, 00:35:15.598 "ns_manage": 0 00:35:15.598 }, 00:35:15.598 "multi_ctrlr": true, 
00:35:15.598 "ana_reporting": false 00:35:15.598 }, 00:35:15.598 "vs": { 00:35:15.598 "nvme_version": "1.3" 00:35:15.598 }, 00:35:15.598 "ns_data": { 00:35:15.598 "id": 1, 00:35:15.598 "can_share": true 00:35:15.598 } 00:35:15.598 } 00:35:15.598 ], 00:35:15.598 "mp_policy": "active_passive" 00:35:15.598 } 00:35:15.598 } 00:35:15.598 ] 00:35:15.598 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1157956 00:35:15.598 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:15.598 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:15.598 Running I/O for 10 seconds... 00:35:16.983 Latency(us) 00:35:16.983 [2024-11-29T12:19:19.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:16.983 Nvme0n1 : 1.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:35:16.983 [2024-11-29T12:19:19.663Z] =================================================================================================================== 00:35:16.983 [2024-11-29T12:19:19.663Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:35:16.983 00:35:17.557 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:17.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.557 Nvme0n1 : 2.00 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:35:17.557 [2024-11-29T12:19:20.237Z] 
=================================================================================================================== 00:35:17.557 [2024-11-29T12:19:20.237Z] Total : 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:35:17.557 00:35:17.821 true 00:35:17.821 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:17.821 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:18.082 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:18.082 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:18.082 13:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1157956 00:35:18.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:18.654 Nvme0n1 : 3.00 17060.33 66.64 0.00 0.00 0.00 0.00 0.00 00:35:18.654 [2024-11-29T12:19:21.334Z] =================================================================================================================== 00:35:18.654 [2024-11-29T12:19:21.334Z] Total : 17060.33 66.64 0.00 0.00 0.00 0.00 0.00 00:35:18.654 00:35:19.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:19.596 Nvme0n1 : 4.00 17192.75 67.16 0.00 0.00 0.00 0.00 0.00 00:35:19.596 [2024-11-29T12:19:22.276Z] =================================================================================================================== 00:35:19.596 [2024-11-29T12:19:22.276Z] Total : 17192.75 67.16 0.00 0.00 0.00 0.00 0.00 00:35:19.596 00:35:20.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:35:20.979 Nvme0n1 : 5.00 17881.60 69.85 0.00 0.00 0.00 0.00 0.00 00:35:20.979 [2024-11-29T12:19:23.659Z] =================================================================================================================== 00:35:20.979 [2024-11-29T12:19:23.659Z] Total : 17881.60 69.85 0.00 0.00 0.00 0.00 0.00 00:35:20.979 00:35:21.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.917 Nvme0n1 : 6.00 19028.83 74.33 0.00 0.00 0.00 0.00 0.00 00:35:21.917 [2024-11-29T12:19:24.597Z] =================================================================================================================== 00:35:21.917 [2024-11-29T12:19:24.597Z] Total : 19028.83 74.33 0.00 0.00 0.00 0.00 0.00 00:35:21.917 00:35:22.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:22.857 Nvme0n1 : 7.00 19848.29 77.53 0.00 0.00 0.00 0.00 0.00 00:35:22.857 [2024-11-29T12:19:25.537Z] =================================================================================================================== 00:35:22.857 [2024-11-29T12:19:25.537Z] Total : 19848.29 77.53 0.00 0.00 0.00 0.00 0.00 00:35:22.857 00:35:23.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.795 Nvme0n1 : 8.00 20462.88 79.93 0.00 0.00 0.00 0.00 0.00 00:35:23.795 [2024-11-29T12:19:26.475Z] =================================================================================================================== 00:35:23.795 [2024-11-29T12:19:26.475Z] Total : 20462.88 79.93 0.00 0.00 0.00 0.00 0.00 00:35:23.795 00:35:24.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:24.734 Nvme0n1 : 9.00 20948.00 81.83 0.00 0.00 0.00 0.00 0.00 00:35:24.734 [2024-11-29T12:19:27.414Z] =================================================================================================================== 00:35:24.735 [2024-11-29T12:19:27.415Z] Total : 20948.00 81.83 0.00 0.00 0.00 0.00 0.00 00:35:24.735 
00:35:25.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.675 Nvme0n1 : 10.00 21334.60 83.34 0.00 0.00 0.00 0.00 0.00 00:35:25.675 [2024-11-29T12:19:28.355Z] =================================================================================================================== 00:35:25.675 [2024-11-29T12:19:28.355Z] Total : 21334.60 83.34 0.00 0.00 0.00 0.00 0.00 00:35:25.675 00:35:25.675 00:35:25.675 Latency(us) 00:35:25.675 [2024-11-29T12:19:28.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.675 Nvme0n1 : 10.00 21335.84 83.34 0.00 0.00 5995.92 2908.16 32768.00 00:35:25.675 [2024-11-29T12:19:28.355Z] =================================================================================================================== 00:35:25.675 [2024-11-29T12:19:28.355Z] Total : 21335.84 83.34 0.00 0.00 5995.92 2908.16 32768.00 00:35:25.675 { 00:35:25.675 "results": [ 00:35:25.675 { 00:35:25.675 "job": "Nvme0n1", 00:35:25.675 "core_mask": "0x2", 00:35:25.675 "workload": "randwrite", 00:35:25.675 "status": "finished", 00:35:25.675 "queue_depth": 128, 00:35:25.675 "io_size": 4096, 00:35:25.675 "runtime": 10.002418, 00:35:25.675 "iops": 21335.840993647736, 00:35:25.675 "mibps": 83.34312888143647, 00:35:25.675 "io_failed": 0, 00:35:25.675 "io_timeout": 0, 00:35:25.675 "avg_latency_us": 5995.922911203786, 00:35:25.675 "min_latency_us": 2908.16, 00:35:25.675 "max_latency_us": 32768.0 00:35:25.675 } 00:35:25.675 ], 00:35:25.675 "core_count": 1 00:35:25.675 } 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1157791 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1157791 ']' 00:35:25.675 13:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1157791 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1157791 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1157791' 00:35:25.675 killing process with pid 1157791 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1157791 00:35:25.675 Received shutdown signal, test time was about 10.000000 seconds 00:35:25.675 00:35:25.675 Latency(us) 00:35:25.675 [2024-11-29T12:19:28.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.675 [2024-11-29T12:19:28.355Z] =================================================================================================================== 00:35:25.675 [2024-11-29T12:19:28.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.675 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1157791 00:35:25.935 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:25.935 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.194 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:26.194 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:26.453 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:26.453 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:26.453 13:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:26.453 [2024-11-29 13:19:29.104843] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:26.714 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:26.715 request: 00:35:26.715 { 00:35:26.715 "uuid": "39c81b2b-aa5b-49b6-9765-6a4c0da96e01", 00:35:26.715 "method": 
"bdev_lvol_get_lvstores", 00:35:26.715 "req_id": 1 00:35:26.715 } 00:35:26.715 Got JSON-RPC error response 00:35:26.715 response: 00:35:26.715 { 00:35:26.715 "code": -19, 00:35:26.715 "message": "No such device" 00:35:26.715 } 00:35:26.715 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:35:26.715 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.715 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.715 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.715 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:26.976 aio_bdev 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:26.976 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:27.237 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 -t 2000 00:35:27.237 [ 00:35:27.237 { 00:35:27.237 "name": "ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934", 00:35:27.237 "aliases": [ 00:35:27.237 "lvs/lvol" 00:35:27.237 ], 00:35:27.237 "product_name": "Logical Volume", 00:35:27.237 "block_size": 4096, 00:35:27.237 "num_blocks": 38912, 00:35:27.237 "uuid": "ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934", 00:35:27.237 "assigned_rate_limits": { 00:35:27.237 "rw_ios_per_sec": 0, 00:35:27.237 "rw_mbytes_per_sec": 0, 00:35:27.237 "r_mbytes_per_sec": 0, 00:35:27.237 "w_mbytes_per_sec": 0 00:35:27.237 }, 00:35:27.237 "claimed": false, 00:35:27.237 "zoned": false, 00:35:27.237 "supported_io_types": { 00:35:27.237 "read": true, 00:35:27.237 "write": true, 00:35:27.237 "unmap": true, 00:35:27.237 "flush": false, 00:35:27.237 "reset": true, 00:35:27.237 "nvme_admin": false, 00:35:27.237 "nvme_io": false, 00:35:27.237 "nvme_io_md": false, 00:35:27.237 "write_zeroes": true, 00:35:27.237 "zcopy": false, 00:35:27.237 "get_zone_info": false, 00:35:27.237 "zone_management": false, 00:35:27.237 "zone_append": false, 00:35:27.237 "compare": false, 00:35:27.237 "compare_and_write": false, 00:35:27.237 "abort": false, 00:35:27.237 "seek_hole": true, 00:35:27.237 "seek_data": true, 00:35:27.237 "copy": false, 00:35:27.237 "nvme_iov_md": false 00:35:27.237 }, 00:35:27.237 "driver_specific": { 00:35:27.237 "lvol": { 00:35:27.237 "lvol_store_uuid": "39c81b2b-aa5b-49b6-9765-6a4c0da96e01", 00:35:27.237 "base_bdev": "aio_bdev", 00:35:27.237 
"thin_provision": false, 00:35:27.237 "num_allocated_clusters": 38, 00:35:27.237 "snapshot": false, 00:35:27.237 "clone": false, 00:35:27.237 "esnap_clone": false 00:35:27.237 } 00:35:27.237 } 00:35:27.237 } 00:35:27.237 ] 00:35:27.237 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:27.237 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:27.237 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:27.498 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:27.498 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 00:35:27.498 13:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:27.498 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:27.498 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac3e3d95-09bf-4e6e-9e3b-f1b767d0f934 00:35:27.758 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39c81b2b-aa5b-49b6-9765-6a4c0da96e01 
00:35:28.018 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:28.018 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:28.018 00:35:28.018 real 0m15.579s 00:35:28.018 user 0m15.300s 00:35:28.018 sys 0m1.375s 00:35:28.018 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.018 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:28.018 ************************************ 00:35:28.018 END TEST lvs_grow_clean 00:35:28.018 ************************************ 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:28.279 ************************************ 00:35:28.279 START TEST lvs_grow_dirty 00:35:28.279 ************************************ 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:28.279 13:19:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:28.279 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:28.539 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:28.539 13:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:28.539 13:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:28.539 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:28.539 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:28.799 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:28.799 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:28.799 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d lvol 150 00:35:29.061 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:29.061 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:29.061 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:29.061 [2024-11-29 13:19:31.676774] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:29.061 [2024-11-29 
13:19:31.676920] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:29.061 true 00:35:29.061 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:29.061 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:29.321 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:29.321 13:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:29.581 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:29.581 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.844 [2024-11-29 13:19:32.361310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.844 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:30.103 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1160712 00:35:30.103 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:30.103 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:30.103 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1160712 /var/tmp/bdevperf.sock 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1160712 ']' 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:30.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.104 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:30.104 [2024-11-29 13:19:32.594341] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:35:30.104 [2024-11-29 13:19:32.594399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160712 ] 00:35:30.104 [2024-11-29 13:19:32.677971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.104 [2024-11-29 13:19:32.707950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.044 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.044 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:31.044 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:31.305 Nvme0n1 00:35:31.305 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:31.305 [ 00:35:31.305 { 00:35:31.305 "name": "Nvme0n1", 00:35:31.305 "aliases": [ 00:35:31.305 "4da58dd2-4af6-4a8e-ac6f-da1d032a875d" 00:35:31.305 ], 00:35:31.305 "product_name": "NVMe disk", 00:35:31.305 "block_size": 4096, 00:35:31.305 "num_blocks": 38912, 00:35:31.305 "uuid": "4da58dd2-4af6-4a8e-ac6f-da1d032a875d", 00:35:31.305 "numa_id": 0, 00:35:31.305 "assigned_rate_limits": { 00:35:31.305 "rw_ios_per_sec": 0, 00:35:31.305 "rw_mbytes_per_sec": 0, 00:35:31.305 "r_mbytes_per_sec": 0, 00:35:31.305 "w_mbytes_per_sec": 0 00:35:31.305 }, 00:35:31.305 "claimed": false, 00:35:31.305 "zoned": false, 
00:35:31.305 "supported_io_types": { 00:35:31.305 "read": true, 00:35:31.305 "write": true, 00:35:31.305 "unmap": true, 00:35:31.305 "flush": true, 00:35:31.305 "reset": true, 00:35:31.305 "nvme_admin": true, 00:35:31.305 "nvme_io": true, 00:35:31.305 "nvme_io_md": false, 00:35:31.305 "write_zeroes": true, 00:35:31.305 "zcopy": false, 00:35:31.305 "get_zone_info": false, 00:35:31.305 "zone_management": false, 00:35:31.305 "zone_append": false, 00:35:31.305 "compare": true, 00:35:31.305 "compare_and_write": true, 00:35:31.305 "abort": true, 00:35:31.305 "seek_hole": false, 00:35:31.305 "seek_data": false, 00:35:31.305 "copy": true, 00:35:31.305 "nvme_iov_md": false 00:35:31.305 }, 00:35:31.305 "memory_domains": [ 00:35:31.305 { 00:35:31.305 "dma_device_id": "system", 00:35:31.305 "dma_device_type": 1 00:35:31.305 } 00:35:31.305 ], 00:35:31.305 "driver_specific": { 00:35:31.305 "nvme": [ 00:35:31.305 { 00:35:31.305 "trid": { 00:35:31.305 "trtype": "TCP", 00:35:31.305 "adrfam": "IPv4", 00:35:31.305 "traddr": "10.0.0.2", 00:35:31.305 "trsvcid": "4420", 00:35:31.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:31.305 }, 00:35:31.305 "ctrlr_data": { 00:35:31.305 "cntlid": 1, 00:35:31.305 "vendor_id": "0x8086", 00:35:31.305 "model_number": "SPDK bdev Controller", 00:35:31.305 "serial_number": "SPDK0", 00:35:31.305 "firmware_revision": "25.01", 00:35:31.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:31.305 "oacs": { 00:35:31.305 "security": 0, 00:35:31.305 "format": 0, 00:35:31.305 "firmware": 0, 00:35:31.305 "ns_manage": 0 00:35:31.305 }, 00:35:31.305 "multi_ctrlr": true, 00:35:31.305 "ana_reporting": false 00:35:31.305 }, 00:35:31.305 "vs": { 00:35:31.305 "nvme_version": "1.3" 00:35:31.305 }, 00:35:31.305 "ns_data": { 00:35:31.305 "id": 1, 00:35:31.305 "can_share": true 00:35:31.305 } 00:35:31.305 } 00:35:31.305 ], 00:35:31.305 "mp_policy": "active_passive" 00:35:31.305 } 00:35:31.305 } 00:35:31.305 ] 00:35:31.305 13:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1161050 00:35:31.305 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:31.305 13:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:31.566 Running I/O for 10 seconds... 00:35:32.506 Latency(us) 00:35:32.506 [2024-11-29T12:19:35.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:32.506 Nvme0n1 : 1.00 16965.00 66.27 0.00 0.00 0.00 0.00 0.00 00:35:32.506 [2024-11-29T12:19:35.186Z] =================================================================================================================== 00:35:32.506 [2024-11-29T12:19:35.186Z] Total : 16965.00 66.27 0.00 0.00 0.00 0.00 0.00 00:35:32.506 00:35:33.444 13:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:33.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:33.444 Nvme0n1 : 2.00 17213.50 67.24 0.00 0.00 0.00 0.00 0.00 00:35:33.444 [2024-11-29T12:19:36.124Z] =================================================================================================================== 00:35:33.444 [2024-11-29T12:19:36.124Z] Total : 17213.50 67.24 0.00 0.00 0.00 0.00 0.00 00:35:33.444 00:35:33.444 true 00:35:33.444 13:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:33.444 13:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:33.705 13:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:33.705 13:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:33.705 13:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1161050 00:35:34.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:34.648 Nvme0n1 : 3.00 17317.67 67.65 0.00 0.00 0.00 0.00 0.00 00:35:34.648 [2024-11-29T12:19:37.328Z] =================================================================================================================== 00:35:34.648 [2024-11-29T12:19:37.328Z] Total : 17317.67 67.65 0.00 0.00 0.00 0.00 0.00 00:35:34.648 00:35:35.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:35.588 Nvme0n1 : 4.00 17369.75 67.85 0.00 0.00 0.00 0.00 0.00 00:35:35.588 [2024-11-29T12:19:38.268Z] =================================================================================================================== 00:35:35.588 [2024-11-29T12:19:38.268Z] Total : 17369.75 67.85 0.00 0.00 0.00 0.00 0.00 00:35:35.588 00:35:36.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:36.530 Nvme0n1 : 5.00 18572.80 72.55 0.00 0.00 0.00 0.00 0.00 00:35:36.530 [2024-11-29T12:19:39.210Z] =================================================================================================================== 00:35:36.530 [2024-11-29T12:19:39.210Z] Total : 18572.80 72.55 0.00 0.00 0.00 0.00 0.00 00:35:36.530 00:35:37.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:35:37.468 Nvme0n1 : 6.00 19604.83 76.58 0.00 0.00 0.00 0.00 0.00 00:35:37.468 [2024-11-29T12:19:40.148Z] =================================================================================================================== 00:35:37.468 [2024-11-29T12:19:40.148Z] Total : 19604.83 76.58 0.00 0.00 0.00 0.00 0.00 00:35:37.468 00:35:38.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:38.410 Nvme0n1 : 7.00 20342.00 79.46 0.00 0.00 0.00 0.00 0.00 00:35:38.410 [2024-11-29T12:19:41.090Z] =================================================================================================================== 00:35:38.410 [2024-11-29T12:19:41.090Z] Total : 20342.00 79.46 0.00 0.00 0.00 0.00 0.00 00:35:38.410 00:35:39.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:39.365 Nvme0n1 : 8.00 20894.88 81.62 0.00 0.00 0.00 0.00 0.00 00:35:39.365 [2024-11-29T12:19:42.045Z] =================================================================================================================== 00:35:39.365 [2024-11-29T12:19:42.045Z] Total : 20894.88 81.62 0.00 0.00 0.00 0.00 0.00 00:35:39.365 00:35:40.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:40.749 Nvme0n1 : 9.00 21339.00 83.36 0.00 0.00 0.00 0.00 0.00 00:35:40.749 [2024-11-29T12:19:43.429Z] =================================================================================================================== 00:35:40.749 [2024-11-29T12:19:43.429Z] Total : 21339.00 83.36 0.00 0.00 0.00 0.00 0.00 00:35:40.749 00:35:41.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:41.688 Nvme0n1 : 10.00 21681.60 84.69 0.00 0.00 0.00 0.00 0.00 00:35:41.688 [2024-11-29T12:19:44.368Z] =================================================================================================================== 00:35:41.688 [2024-11-29T12:19:44.368Z] Total : 21681.60 84.69 0.00 0.00 0.00 0.00 0.00 00:35:41.688 00:35:41.688 
00:35:41.688 Latency(us) 00:35:41.688 [2024-11-29T12:19:44.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:41.688 Nvme0n1 : 10.00 21688.52 84.72 0.00 0.00 5898.90 3003.73 29491.20 00:35:41.688 [2024-11-29T12:19:44.368Z] =================================================================================================================== 00:35:41.688 [2024-11-29T12:19:44.368Z] Total : 21688.52 84.72 0.00 0.00 5898.90 3003.73 29491.20 00:35:41.688 { 00:35:41.688 "results": [ 00:35:41.688 { 00:35:41.688 "job": "Nvme0n1", 00:35:41.688 "core_mask": "0x2", 00:35:41.688 "workload": "randwrite", 00:35:41.688 "status": "finished", 00:35:41.688 "queue_depth": 128, 00:35:41.688 "io_size": 4096, 00:35:41.688 "runtime": 10.00271, 00:35:41.688 "iops": 21688.522410426773, 00:35:41.688 "mibps": 84.72079066572958, 00:35:41.688 "io_failed": 0, 00:35:41.688 "io_timeout": 0, 00:35:41.688 "avg_latency_us": 5898.902263195418, 00:35:41.688 "min_latency_us": 3003.733333333333, 00:35:41.688 "max_latency_us": 29491.2 00:35:41.688 } 00:35:41.688 ], 00:35:41.688 "core_count": 1 00:35:41.688 } 00:35:41.688 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1160712 00:35:41.688 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1160712 ']' 00:35:41.688 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1160712 00:35:41.688 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:41.688 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.689 13:19:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160712 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160712' 00:35:41.689 killing process with pid 1160712 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1160712 00:35:41.689 Received shutdown signal, test time was about 10.000000 seconds 00:35:41.689 00:35:41.689 Latency(us) 00:35:41.689 [2024-11-29T12:19:44.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.689 [2024-11-29T12:19:44.369Z] =================================================================================================================== 00:35:41.689 [2024-11-29T12:19:44.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1160712 00:35:41.689 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:41.948 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:41.949 13:19:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:41.949 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1157226 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1157226 00:35:42.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1157226 Killed "${NVMF_APP[@]}" "$@" 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1163069 00:35:42.208 13:19:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1163069 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1163069 ']' 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.208 13:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:42.208 [2024-11-29 13:19:44.845012] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:42.208 [2024-11-29 13:19:44.845969] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:35:42.208 [2024-11-29 13:19:44.846008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.468 [2024-11-29 13:19:44.937205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.468 [2024-11-29 13:19:44.966118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.468 [2024-11-29 13:19:44.966144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.468 [2024-11-29 13:19:44.966150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.468 [2024-11-29 13:19:44.966155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.468 [2024-11-29 13:19:44.966163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.468 [2024-11-29 13:19:44.966610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.468 [2024-11-29 13:19:45.017145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:42.468 [2024-11-29 13:19:45.017340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:43.037 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.037 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:43.037 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:43.037 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.038 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:43.038 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:43.038 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:43.298 [2024-11-29 13:19:45.840984] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:43.298 [2024-11-29 13:19:45.841246] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:43.298 [2024-11-29 13:19:45.841358] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:43.298 13:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:43.559 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4da58dd2-4af6-4a8e-ac6f-da1d032a875d -t 2000 00:35:43.560 [ 00:35:43.560 { 00:35:43.560 "name": "4da58dd2-4af6-4a8e-ac6f-da1d032a875d", 00:35:43.560 "aliases": [ 00:35:43.560 "lvs/lvol" 00:35:43.560 ], 00:35:43.560 "product_name": "Logical Volume", 00:35:43.560 "block_size": 4096, 00:35:43.560 "num_blocks": 38912, 00:35:43.560 "uuid": "4da58dd2-4af6-4a8e-ac6f-da1d032a875d", 00:35:43.560 "assigned_rate_limits": { 00:35:43.560 "rw_ios_per_sec": 0, 00:35:43.560 "rw_mbytes_per_sec": 0, 00:35:43.560 "r_mbytes_per_sec": 0, 00:35:43.560 "w_mbytes_per_sec": 0 00:35:43.560 }, 00:35:43.560 "claimed": false, 00:35:43.560 "zoned": false, 00:35:43.560 "supported_io_types": { 00:35:43.560 "read": true, 00:35:43.560 "write": true, 00:35:43.560 "unmap": true, 00:35:43.560 "flush": false, 00:35:43.560 "reset": true, 00:35:43.560 "nvme_admin": false, 00:35:43.560 "nvme_io": false, 00:35:43.560 "nvme_io_md": false, 00:35:43.560 "write_zeroes": true, 
00:35:43.560 "zcopy": false, 00:35:43.560 "get_zone_info": false, 00:35:43.560 "zone_management": false, 00:35:43.560 "zone_append": false, 00:35:43.560 "compare": false, 00:35:43.560 "compare_and_write": false, 00:35:43.560 "abort": false, 00:35:43.560 "seek_hole": true, 00:35:43.560 "seek_data": true, 00:35:43.560 "copy": false, 00:35:43.560 "nvme_iov_md": false 00:35:43.560 }, 00:35:43.560 "driver_specific": { 00:35:43.560 "lvol": { 00:35:43.560 "lvol_store_uuid": "bcc731ed-9bdf-464e-b73c-ca87d2f01e9d", 00:35:43.560 "base_bdev": "aio_bdev", 00:35:43.560 "thin_provision": false, 00:35:43.560 "num_allocated_clusters": 38, 00:35:43.560 "snapshot": false, 00:35:43.560 "clone": false, 00:35:43.560 "esnap_clone": false 00:35:43.560 } 00:35:43.560 } 00:35:43.560 } 00:35:43.560 ] 00:35:43.560 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:43.560 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:43.560 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:43.820 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:43.820 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:43.820 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:44.079 [2024-11-29 13:19:46.691071] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:44.079 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:44.339 request: 00:35:44.339 { 00:35:44.339 "uuid": "bcc731ed-9bdf-464e-b73c-ca87d2f01e9d", 00:35:44.339 "method": "bdev_lvol_get_lvstores", 00:35:44.339 "req_id": 1 00:35:44.339 } 00:35:44.339 Got JSON-RPC error response 00:35:44.339 response: 00:35:44.339 { 00:35:44.339 "code": -19, 00:35:44.339 "message": "No such device" 00:35:44.339 } 00:35:44.339 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:44.339 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:44.339 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:44.339 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:44.339 13:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:44.599 aio_bdev 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:44.599 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4da58dd2-4af6-4a8e-ac6f-da1d032a875d -t 2000 00:35:44.859 [ 00:35:44.859 { 00:35:44.859 "name": "4da58dd2-4af6-4a8e-ac6f-da1d032a875d", 00:35:44.859 "aliases": [ 00:35:44.859 "lvs/lvol" 00:35:44.859 ], 00:35:44.859 "product_name": "Logical Volume", 00:35:44.859 "block_size": 4096, 00:35:44.859 "num_blocks": 38912, 00:35:44.859 "uuid": "4da58dd2-4af6-4a8e-ac6f-da1d032a875d", 00:35:44.859 "assigned_rate_limits": { 00:35:44.859 "rw_ios_per_sec": 0, 00:35:44.859 "rw_mbytes_per_sec": 0, 00:35:44.859 
"r_mbytes_per_sec": 0, 00:35:44.859 "w_mbytes_per_sec": 0 00:35:44.859 }, 00:35:44.859 "claimed": false, 00:35:44.859 "zoned": false, 00:35:44.859 "supported_io_types": { 00:35:44.859 "read": true, 00:35:44.859 "write": true, 00:35:44.859 "unmap": true, 00:35:44.859 "flush": false, 00:35:44.859 "reset": true, 00:35:44.859 "nvme_admin": false, 00:35:44.859 "nvme_io": false, 00:35:44.859 "nvme_io_md": false, 00:35:44.859 "write_zeroes": true, 00:35:44.859 "zcopy": false, 00:35:44.859 "get_zone_info": false, 00:35:44.859 "zone_management": false, 00:35:44.859 "zone_append": false, 00:35:44.859 "compare": false, 00:35:44.859 "compare_and_write": false, 00:35:44.859 "abort": false, 00:35:44.859 "seek_hole": true, 00:35:44.859 "seek_data": true, 00:35:44.859 "copy": false, 00:35:44.859 "nvme_iov_md": false 00:35:44.859 }, 00:35:44.859 "driver_specific": { 00:35:44.859 "lvol": { 00:35:44.859 "lvol_store_uuid": "bcc731ed-9bdf-464e-b73c-ca87d2f01e9d", 00:35:44.859 "base_bdev": "aio_bdev", 00:35:44.859 "thin_provision": false, 00:35:44.859 "num_allocated_clusters": 38, 00:35:44.859 "snapshot": false, 00:35:44.859 "clone": false, 00:35:44.859 "esnap_clone": false 00:35:44.859 } 00:35:44.859 } 00:35:44.859 } 00:35:44.859 ] 00:35:44.859 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:44.859 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:44.859 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:45.120 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:45.120 13:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:45.120 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:45.120 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:45.120 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4da58dd2-4af6-4a8e-ac6f-da1d032a875d 00:35:45.381 13:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcc731ed-9bdf-464e-b73c-ca87d2f01e9d 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:45.642 00:35:45.642 real 0m17.482s 00:35:45.642 user 0m35.526s 00:35:45.642 sys 0m2.951s 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:45.642 ************************************ 00:35:45.642 END TEST lvs_grow_dirty 00:35:45.642 ************************************ 
00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:45.642 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:45.642 nvmf_trace.0 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.902 13:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.902 rmmod nvme_tcp 00:35:45.902 rmmod nvme_fabrics 00:35:45.902 rmmod nvme_keyring 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1163069 ']' 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1163069 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1163069 ']' 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1163069 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1163069 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:45.902 
13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1163069' 00:35:45.902 killing process with pid 1163069 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1163069 00:35:45.902 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1163069 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.162 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.207 
13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.207 00:35:48.207 real 0m44.228s 00:35:48.207 user 0m53.719s 00:35:48.207 sys 0m10.331s 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:48.207 ************************************ 00:35:48.207 END TEST nvmf_lvs_grow 00:35:48.207 ************************************ 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:48.207 ************************************ 00:35:48.207 START TEST nvmf_bdev_io_wait 00:35:48.207 ************************************ 00:35:48.207 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:48.207 * Looking for test storage... 
00:35:48.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.469 --rc genhtml_branch_coverage=1 00:35:48.469 --rc genhtml_function_coverage=1 00:35:48.469 --rc genhtml_legend=1 00:35:48.469 --rc geninfo_all_blocks=1 00:35:48.469 --rc geninfo_unexecuted_blocks=1 00:35:48.469 00:35:48.469 ' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.469 --rc genhtml_branch_coverage=1 00:35:48.469 --rc genhtml_function_coverage=1 00:35:48.469 --rc genhtml_legend=1 00:35:48.469 --rc geninfo_all_blocks=1 00:35:48.469 --rc geninfo_unexecuted_blocks=1 00:35:48.469 00:35:48.469 ' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.469 --rc genhtml_branch_coverage=1 00:35:48.469 --rc genhtml_function_coverage=1 00:35:48.469 --rc genhtml_legend=1 00:35:48.469 --rc geninfo_all_blocks=1 00:35:48.469 --rc geninfo_unexecuted_blocks=1 00:35:48.469 00:35:48.469 ' 00:35:48.469 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:48.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:48.469 --rc genhtml_branch_coverage=1 00:35:48.469 --rc genhtml_function_coverage=1 
00:35:48.469 --rc genhtml_legend=1 00:35:48.470 --rc geninfo_all_blocks=1 00:35:48.470 --rc geninfo_unexecuted_blocks=1 00:35:48.470 00:35:48.470 ' 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.470 13:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.470 13:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.470 13:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:48.470 13:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:48.470 13:19:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:48.470 13:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:56.611 13:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:56.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:56.611 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:56.611 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:56.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:56.611 13:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:56.611 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:56.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:56.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:35:56.612 00:35:56.612 --- 10.0.0.2 ping statistics --- 00:35:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.612 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:56.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:56.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:35:56.612 00:35:56.612 --- 10.0.0.1 ping statistics --- 00:35:56.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.612 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:56.612 13:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1168094 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1168094 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1168094 ']' 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.612 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.612 [2024-11-29 13:19:58.569311] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:56.612 [2024-11-29 13:19:58.570435] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:35:56.612 [2024-11-29 13:19:58.570485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.612 [2024-11-29 13:19:58.657290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:56.612 [2024-11-29 13:19:58.712290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:56.612 [2024-11-29 13:19:58.712351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.612 [2024-11-29 13:19:58.712360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.612 [2024-11-29 13:19:58.712367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.612 [2024-11-29 13:19:58.712373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:56.612 [2024-11-29 13:19:58.717190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.612 [2024-11-29 13:19:58.717293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:56.612 [2024-11-29 13:19:58.717615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:56.612 [2024-11-29 13:19:58.717618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.612 [2024-11-29 13:19:58.718124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.873 13:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.873 [2024-11-29 13:19:59.502804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:56.873 [2024-11-29 13:19:59.503037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:56.873 [2024-11-29 13:19:59.503411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:56.873 [2024-11-29 13:19:59.503562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:56.873 [2024-11-29 13:19:59.514338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.873 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:57.135 Malloc0 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.135 13:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:57.135 [2024-11-29 13:19:59.582907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1168170 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1168172 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:57.135 13:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:57.135 { 00:35:57.135 "params": { 00:35:57.135 "name": "Nvme$subsystem", 00:35:57.135 "trtype": "$TEST_TRANSPORT", 00:35:57.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.135 "adrfam": "ipv4", 00:35:57.135 "trsvcid": "$NVMF_PORT", 00:35:57.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.135 "hdgst": ${hdgst:-false}, 00:35:57.135 "ddgst": ${ddgst:-false} 00:35:57.135 }, 00:35:57.135 "method": "bdev_nvme_attach_controller" 00:35:57.135 } 00:35:57.135 EOF 00:35:57.135 )") 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1168174 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:57.135 13:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:57.135 { 00:35:57.135 "params": { 00:35:57.135 "name": "Nvme$subsystem", 00:35:57.135 "trtype": "$TEST_TRANSPORT", 00:35:57.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.135 "adrfam": "ipv4", 00:35:57.135 "trsvcid": "$NVMF_PORT", 00:35:57.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.135 "hdgst": ${hdgst:-false}, 00:35:57.135 "ddgst": ${ddgst:-false} 00:35:57.135 }, 00:35:57.135 "method": "bdev_nvme_attach_controller" 00:35:57.135 } 00:35:57.135 EOF 00:35:57.135 )") 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1168177 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:57.135 { 00:35:57.135 "params": { 00:35:57.135 "name": 
"Nvme$subsystem", 00:35:57.135 "trtype": "$TEST_TRANSPORT", 00:35:57.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.135 "adrfam": "ipv4", 00:35:57.135 "trsvcid": "$NVMF_PORT", 00:35:57.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.135 "hdgst": ${hdgst:-false}, 00:35:57.135 "ddgst": ${ddgst:-false} 00:35:57.135 }, 00:35:57.135 "method": "bdev_nvme_attach_controller" 00:35:57.135 } 00:35:57.135 EOF 00:35:57.135 )") 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:57.135 { 00:35:57.135 "params": { 00:35:57.135 "name": "Nvme$subsystem", 00:35:57.135 "trtype": "$TEST_TRANSPORT", 00:35:57.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.135 "adrfam": "ipv4", 00:35:57.135 "trsvcid": "$NVMF_PORT", 00:35:57.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.135 "hdgst": ${hdgst:-false}, 00:35:57.135 "ddgst": ${ddgst:-false} 00:35:57.135 }, 00:35:57.135 "method": 
"bdev_nvme_attach_controller" 00:35:57.135 } 00:35:57.135 EOF 00:35:57.135 )") 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1168170 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:57.135 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:57.136 "params": { 00:35:57.136 "name": "Nvme1", 00:35:57.136 "trtype": "tcp", 00:35:57.136 "traddr": "10.0.0.2", 00:35:57.136 "adrfam": "ipv4", 00:35:57.136 "trsvcid": "4420", 00:35:57.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.136 "hdgst": false, 00:35:57.136 "ddgst": false 00:35:57.136 }, 00:35:57.136 "method": "bdev_nvme_attach_controller" 00:35:57.136 }' 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:57.136 "params": { 00:35:57.136 "name": "Nvme1", 00:35:57.136 "trtype": "tcp", 00:35:57.136 "traddr": "10.0.0.2", 00:35:57.136 "adrfam": "ipv4", 00:35:57.136 "trsvcid": "4420", 00:35:57.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.136 "hdgst": false, 00:35:57.136 "ddgst": false 00:35:57.136 }, 00:35:57.136 "method": "bdev_nvme_attach_controller" 00:35:57.136 }' 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:57.136 "params": { 00:35:57.136 "name": "Nvme1", 00:35:57.136 "trtype": "tcp", 00:35:57.136 "traddr": "10.0.0.2", 00:35:57.136 "adrfam": "ipv4", 00:35:57.136 "trsvcid": "4420", 00:35:57.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.136 "hdgst": false, 00:35:57.136 "ddgst": false 00:35:57.136 }, 00:35:57.136 "method": "bdev_nvme_attach_controller" 00:35:57.136 }' 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:57.136 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:57.136 "params": { 00:35:57.136 "name": "Nvme1", 00:35:57.136 "trtype": "tcp", 00:35:57.136 "traddr": "10.0.0.2", 00:35:57.136 "adrfam": "ipv4", 00:35:57.136 "trsvcid": "4420", 00:35:57.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:57.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:57.136 "hdgst": false, 00:35:57.136 "ddgst": false 00:35:57.136 }, 00:35:57.136 "method": "bdev_nvme_attach_controller" 
00:35:57.136 }' 00:35:57.136 [2024-11-29 13:19:59.638817] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:35:57.136 [2024-11-29 13:19:59.638879] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:57.136 [2024-11-29 13:19:59.638942] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:35:57.136 [2024-11-29 13:19:59.638992] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:57.136 [2024-11-29 13:19:59.639520] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:35:57.136 [2024-11-29 13:19:59.639568] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:57.136 [2024-11-29 13:19:59.643686] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:35:57.136 [2024-11-29 13:19:59.643739] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:57.397 [2024-11-29 13:19:59.852105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.397 [2024-11-29 13:19:59.893040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:57.397 [2024-11-29 13:19:59.943491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.397 [2024-11-29 13:19:59.984128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:57.397 [2024-11-29 13:20:00.037846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.660 [2024-11-29 13:20:00.081859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:57.660 [2024-11-29 13:20:00.112586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.660 [2024-11-29 13:20:00.152776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:57.660 Running I/O for 1 seconds... 00:35:57.660 Running I/O for 1 seconds... 00:35:57.660 Running I/O for 1 seconds... 00:35:57.922 Running I/O for 1 seconds... 
00:35:58.866 11691.00 IOPS, 45.67 MiB/s
00:35:58.866 Latency(us)
00:35:58.866 [2024-11-29T12:20:01.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:58.866 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:35:58.866 Nvme1n1 : 1.01 11752.82 45.91 0.00 0.00 10854.73 5024.43 12288.00
00:35:58.866 [2024-11-29T12:20:01.546Z] ===================================================================================================================
00:35:58.866 [2024-11-29T12:20:01.546Z] Total : 11752.82 45.91 0.00 0.00 10854.73 5024.43 12288.00
00:35:58.866 9472.00 IOPS, 37.00 MiB/s
[2024-11-29T12:20:01.546Z] 9521.00 IOPS, 37.19 MiB/s
00:35:58.866 Latency(us)
00:35:58.866 [2024-11-29T12:20:01.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:58.866 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:35:58.866 Nvme1n1 : 1.01 9545.34 37.29 0.00 0.00 13354.61 2293.76 20753.07
00:35:58.866 [2024-11-29T12:20:01.546Z] ===================================================================================================================
00:35:58.866 [2024-11-29T12:20:01.546Z] Total : 9545.34 37.29 0.00 0.00 13354.61 2293.76 20753.07
00:35:58.866
00:35:58.866 Latency(us)
00:35:58.866 [2024-11-29T12:20:01.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:58.866 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:35:58.866 Nvme1n1 : 1.01 9581.40 37.43 0.00 0.00 13310.99 5843.63 20643.84
00:35:58.866 [2024-11-29T12:20:01.546Z] ===================================================================================================================
00:35:58.866 [2024-11-29T12:20:01.546Z] Total : 9581.40 37.43 0.00 0.00 13310.99 5843.63 20643.84
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1168172
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1168174
00:35:58.866 180392.00 IOPS, 704.66 MiB/s
00:35:58.866 Latency(us)
00:35:58.866 [2024-11-29T12:20:01.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:58.866 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:35:58.866 Nvme1n1 : 1.00 180034.89 703.26 0.00 0.00 706.91 298.67 1979.73
00:35:58.866 [2024-11-29T12:20:01.546Z] ===================================================================================================================
00:35:58.866 [2024-11-29T12:20:01.546Z] Total : 180034.89 703.26 0.00 0.00 706.91 298.67 1979.73
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1168177
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:58.866 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123
-- # '[' tcp == tcp ']'
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:59.127 rmmod nvme_tcp
00:35:59.127 rmmod nvme_fabrics
00:35:59.127 rmmod nvme_keyring
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1168094 ']'
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1168094
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1168094 ']'
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1168094
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168094
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168094'
00:35:59.127 killing process with pid 1168094
13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1168094
00:35:59.127 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1168094
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:59.388 13:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:01.303
00:36:01.303 real 0m13.137s
00:36:01.303 user 0m16.048s
00:36:01.303 sys 0m7.675s
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:36:01.303 ************************************
00:36:01.303 END TEST nvmf_bdev_io_wait
00:36:01.303 ************************************
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:01.303 13:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:01.565 ************************************
00:36:01.565 START TEST nvmf_queue_depth
00:36:01.565 ************************************
00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:36:01.565 * Looking for test storage...
00:36:01.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.565 --rc genhtml_branch_coverage=1 00:36:01.565 --rc genhtml_function_coverage=1 00:36:01.565 --rc genhtml_legend=1 00:36:01.565 --rc geninfo_all_blocks=1 00:36:01.565 --rc geninfo_unexecuted_blocks=1 00:36:01.565 00:36:01.565 ' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.565 --rc genhtml_branch_coverage=1 00:36:01.565 --rc genhtml_function_coverage=1 00:36:01.565 --rc genhtml_legend=1 00:36:01.565 --rc geninfo_all_blocks=1 00:36:01.565 --rc geninfo_unexecuted_blocks=1 00:36:01.565 00:36:01.565 ' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.565 --rc genhtml_branch_coverage=1 00:36:01.565 --rc genhtml_function_coverage=1 00:36:01.565 --rc genhtml_legend=1 00:36:01.565 --rc geninfo_all_blocks=1 00:36:01.565 --rc geninfo_unexecuted_blocks=1 00:36:01.565 00:36:01.565 ' 00:36:01.565 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.566 --rc genhtml_branch_coverage=1 00:36:01.566 --rc genhtml_function_coverage=1 00:36:01.566 --rc genhtml_legend=1 00:36:01.566 --rc 
geninfo_all_blocks=1 00:36:01.566 --rc geninfo_unexecuted_blocks=1 00:36:01.566 00:36:01.566 ' 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.566 13:20:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.566 13:20:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.566 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.828 13:20:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.828 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:09.973 
13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:09.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.973 13:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:09.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:09.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:09.973 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:09.973 13:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:09.973 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:09.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:09.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:36:09.974 00:36:09.974 --- 10.0.0.2 ping statistics --- 00:36:09.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.974 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:09.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:09.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:36:09.974 00:36:09.974 --- 10.0.0.1 ping statistics --- 00:36:09.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.974 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:09.974 13:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1172849 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1172849 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1172849 ']' 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.974 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:09.974 [2024-11-29 13:20:11.852277] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:09.974 [2024-11-29 13:20:11.853403] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:36:09.974 [2024-11-29 13:20:11.853457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.974 [2024-11-29 13:20:11.956365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.974 [2024-11-29 13:20:12.006151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.974 [2024-11-29 13:20:12.006210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.974 [2024-11-29 13:20:12.006219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.974 [2024-11-29 13:20:12.006227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.974 [2024-11-29 13:20:12.006234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.974 [2024-11-29 13:20:12.006980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.974 [2024-11-29 13:20:12.084847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:09.974 [2024-11-29 13:20:12.085144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.235 [2024-11-29 13:20:12.727819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.235 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.236 Malloc0 00:36:10.236 13:20:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.236 [2024-11-29 13:20:12.807992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.236 
13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1172970 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1172970 /var/tmp/bdevperf.sock 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1172970 ']' 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:10.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.236 13:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:10.236 [2024-11-29 13:20:12.865417] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:36:10.236 [2024-11-29 13:20:12.865481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172970 ] 00:36:10.497 [2024-11-29 13:20:12.957266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.497 [2024-11-29 13:20:13.010797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.069 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.069 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:11.069 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:11.069 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.069 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:11.330 NVMe0n1 00:36:11.330 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.330 13:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:11.591 Running I/O for 10 seconds... 
00:36:13.480 8206.00 IOPS, 32.05 MiB/s [2024-11-29T12:20:17.103Z] 8706.50 IOPS, 34.01 MiB/s [2024-11-29T12:20:18.047Z] 8927.33 IOPS, 34.87 MiB/s [2024-11-29T12:20:19.432Z] 9937.00 IOPS, 38.82 MiB/s [2024-11-29T12:20:20.374Z] 10634.00 IOPS, 41.54 MiB/s [2024-11-29T12:20:21.316Z] 11093.00 IOPS, 43.33 MiB/s [2024-11-29T12:20:22.261Z] 11424.71 IOPS, 44.63 MiB/s [2024-11-29T12:20:23.205Z] 11717.62 IOPS, 45.77 MiB/s [2024-11-29T12:20:24.147Z] 11913.56 IOPS, 46.54 MiB/s [2024-11-29T12:20:24.147Z] 12075.10 IOPS, 47.17 MiB/s 00:36:21.467 Latency(us) 00:36:21.467 [2024-11-29T12:20:24.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.467 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:36:21.467 Verification LBA range: start 0x0 length 0x4000 00:36:21.467 NVMe0n1 : 10.06 12108.22 47.30 0.00 0.00 84286.43 22828.37 72526.51 00:36:21.467 [2024-11-29T12:20:24.147Z] =================================================================================================================== 00:36:21.467 [2024-11-29T12:20:24.147Z] Total : 12108.22 47.30 0.00 0.00 84286.43 22828.37 72526.51 00:36:21.467 { 00:36:21.467 "results": [ 00:36:21.467 { 00:36:21.467 "job": "NVMe0n1", 00:36:21.467 "core_mask": "0x1", 00:36:21.467 "workload": "verify", 00:36:21.467 "status": "finished", 00:36:21.467 "verify_range": { 00:36:21.467 "start": 0, 00:36:21.467 "length": 16384 00:36:21.467 }, 00:36:21.467 "queue_depth": 1024, 00:36:21.467 "io_size": 4096, 00:36:21.467 "runtime": 10.057214, 00:36:21.467 "iops": 12108.224007165403, 00:36:21.467 "mibps": 47.297750027989856, 00:36:21.467 "io_failed": 0, 00:36:21.467 "io_timeout": 0, 00:36:21.467 "avg_latency_us": 84286.43441450763, 00:36:21.467 "min_latency_us": 22828.373333333333, 00:36:21.467 "max_latency_us": 72526.50666666667 00:36:21.467 } 00:36:21.467 ], 00:36:21.467 "core_count": 1 00:36:21.467 } 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1172970 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1172970 ']' 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1172970 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.467 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172970 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172970' 00:36:21.728 killing process with pid 1172970 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1172970 00:36:21.728 Received shutdown signal, test time was about 10.000000 seconds 00:36:21.728 00:36:21.728 Latency(us) 00:36:21.728 [2024-11-29T12:20:24.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.728 [2024-11-29T12:20:24.408Z] =================================================================================================================== 00:36:21.728 [2024-11-29T12:20:24.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1172970 00:36:21.728 13:20:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:21.728 rmmod nvme_tcp 00:36:21.728 rmmod nvme_fabrics 00:36:21.728 rmmod nvme_keyring 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1172849 ']' 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1172849 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1172849 ']' 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1172849 00:36:21.728 13:20:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.728 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172849 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172849' 00:36:21.989 killing process with pid 1172849 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1172849 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1172849 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.989 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:36:21.990 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.990 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.990 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.990 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.990 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.538 00:36:24.538 real 0m22.610s 00:36:24.538 user 0m24.889s 00:36:24.538 sys 0m7.487s 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:24.538 ************************************ 00:36:24.538 END TEST nvmf_queue_depth 00:36:24.538 ************************************ 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:24.538 ************************************ 00:36:24.538 START 
TEST nvmf_target_multipath 00:36:24.538 ************************************ 00:36:24.538 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:24.538 * Looking for test storage... 00:36:24.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.539 13:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.539 --rc genhtml_branch_coverage=1 00:36:24.539 --rc genhtml_function_coverage=1 00:36:24.539 --rc genhtml_legend=1 00:36:24.539 --rc geninfo_all_blocks=1 00:36:24.539 --rc geninfo_unexecuted_blocks=1 00:36:24.539 00:36:24.539 ' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.539 --rc genhtml_branch_coverage=1 00:36:24.539 --rc genhtml_function_coverage=1 00:36:24.539 --rc genhtml_legend=1 00:36:24.539 --rc geninfo_all_blocks=1 00:36:24.539 --rc geninfo_unexecuted_blocks=1 00:36:24.539 00:36:24.539 ' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.539 --rc genhtml_branch_coverage=1 00:36:24.539 --rc genhtml_function_coverage=1 00:36:24.539 --rc genhtml_legend=1 00:36:24.539 --rc geninfo_all_blocks=1 00:36:24.539 --rc geninfo_unexecuted_blocks=1 00:36:24.539 00:36:24.539 ' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.539 --rc genhtml_branch_coverage=1 00:36:24.539 --rc genhtml_function_coverage=1 00:36:24.539 --rc genhtml_legend=1 00:36:24.539 --rc geninfo_all_blocks=1 00:36:24.539 --rc geninfo_unexecuted_blocks=1 00:36:24.539 00:36:24.539 ' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.539 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.540 13:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.540 13:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.540 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:32.686 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:32.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:32.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:32.686 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:32.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.687 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:32.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.687 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.687 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:32.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:36:32.687 00:36:32.687 --- 10.0.0.2 ping statistics --- 00:36:32.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.687 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:32.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:36:32.687 00:36:32.687 --- 10.0.0.1 ping statistics --- 00:36:32.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.687 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:32.687 only one NIC for nvmf test 00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:32.687 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:32.687 rmmod nvme_tcp
00:36:32.687 rmmod nvme_fabrics
00:36:32.687 rmmod nvme_keyring
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:32.687 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:34.071 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:34.072 
00:36:34.072 real 0m10.014s
00:36:34.072 user 0m2.237s
00:36:34.072 sys 0m5.727s
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:34.072 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:36:34.072 ************************************
00:36:34.072 END TEST nvmf_target_multipath
00:36:34.072 ************************************
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:34.333 ************************************
00:36:34.333 START TEST nvmf_zcopy
00:36:34.333 ************************************
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:36:34.333 * Looking for test storage...
00:36:34.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:34.333 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:36:34.334 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:36:34.334 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:34.334 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:36:34.334 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:36:34.334 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:36:34.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:34.334 --rc genhtml_branch_coverage=1
00:36:34.334 --rc genhtml_function_coverage=1
00:36:34.334 --rc genhtml_legend=1
00:36:34.334 --rc geninfo_all_blocks=1
00:36:34.334 --rc geninfo_unexecuted_blocks=1
00:36:34.334 
00:36:34.334 '
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:36:34.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:34.334 --rc genhtml_branch_coverage=1
00:36:34.334 --rc genhtml_function_coverage=1
00:36:34.334 --rc genhtml_legend=1
00:36:34.334 --rc geninfo_all_blocks=1
00:36:34.334 --rc geninfo_unexecuted_blocks=1
00:36:34.334 
00:36:34.334 '
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:36:34.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:34.334 --rc genhtml_branch_coverage=1
00:36:34.334 --rc genhtml_function_coverage=1
00:36:34.334 --rc genhtml_legend=1
00:36:34.334 --rc geninfo_all_blocks=1
00:36:34.334 --rc geninfo_unexecuted_blocks=1
00:36:34.334 
00:36:34.334 '
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:36:34.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:34.334 --rc genhtml_branch_coverage=1
00:36:34.334 --rc genhtml_function_coverage=1
00:36:34.334 --rc genhtml_legend=1
00:36:34.334 --rc geninfo_all_blocks=1
00:36:34.334 --rc geninfo_unexecuted_blocks=1
00:36:34.334 
00:36:34.334 '
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:34.334 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:34.595 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:36:34.596 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:36:42.740 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:36:42.740 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:36:42.740 Found net devices under 0000:4b:00.0: cvl_0_0
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:36:42.740 Found net devices under 0000:4b:00.1: cvl_0_1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:42.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:42.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms
00:36:42.740 
00:36:42.740 --- 10.0.0.2 ping statistics ---
00:36:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:42.740 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:42.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:42.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:36:42.740 
00:36:42.740 --- 10.0.0.1 ping statistics ---
00:36:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:42.740 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1183530
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1183530
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1183530 ']'
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:42.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:42.740 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:42.740 [2024-11-29 13:20:44.636627] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:42.740 [2024-11-29 13:20:44.637749] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization...
00:36:42.740 [2024-11-29 13:20:44.637798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.740 [2024-11-29 13:20:44.737802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.740 [2024-11-29 13:20:44.788537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.740 [2024-11-29 13:20:44.788589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.740 [2024-11-29 13:20:44.788598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.740 [2024-11-29 13:20:44.788605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.740 [2024-11-29 13:20:44.788611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:42.740 [2024-11-29 13:20:44.789417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.740 [2024-11-29 13:20:44.868450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:42.740 [2024-11-29 13:20:44.868742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.003 [2024-11-29 13:20:45.522303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.003 
13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.003 [2024-11-29 13:20:45.550612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:43.003 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.004 malloc0 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:43.004 { 00:36:43.004 "params": { 00:36:43.004 "name": "Nvme$subsystem", 00:36:43.004 "trtype": "$TEST_TRANSPORT", 00:36:43.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:43.004 "adrfam": "ipv4", 00:36:43.004 "trsvcid": "$NVMF_PORT", 00:36:43.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:43.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:43.004 "hdgst": ${hdgst:-false}, 00:36:43.004 "ddgst": ${ddgst:-false} 00:36:43.004 }, 00:36:43.004 "method": "bdev_nvme_attach_controller" 00:36:43.004 } 00:36:43.004 EOF 00:36:43.004 )") 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:43.004 13:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:43.004 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:43.004 "params": { 00:36:43.004 "name": "Nvme1", 00:36:43.004 "trtype": "tcp", 00:36:43.004 "traddr": "10.0.0.2", 00:36:43.004 "adrfam": "ipv4", 00:36:43.004 "trsvcid": "4420", 00:36:43.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:43.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:43.004 "hdgst": false, 00:36:43.004 "ddgst": false 00:36:43.004 }, 00:36:43.004 "method": "bdev_nvme_attach_controller" 00:36:43.004 }' 00:36:43.004 [2024-11-29 13:20:45.658375] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:36:43.004 [2024-11-29 13:20:45.658443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183591 ] 00:36:43.267 [2024-11-29 13:20:45.748947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.267 [2024-11-29 13:20:45.802146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.529 Running I/O for 10 seconds... 
00:36:45.419 6396.00 IOPS, 49.97 MiB/s [2024-11-29T12:20:49.486Z] 6441.50 IOPS, 50.32 MiB/s [2024-11-29T12:20:50.430Z] 6463.33 IOPS, 50.49 MiB/s [2024-11-29T12:20:51.375Z] 6679.50 IOPS, 52.18 MiB/s [2024-11-29T12:20:52.319Z] 7269.40 IOPS, 56.79 MiB/s [2024-11-29T12:20:53.350Z] 7671.33 IOPS, 59.93 MiB/s [2024-11-29T12:20:54.396Z] 7950.57 IOPS, 62.11 MiB/s [2024-11-29T12:20:55.337Z] 8162.62 IOPS, 63.77 MiB/s [2024-11-29T12:20:56.276Z] 8324.11 IOPS, 65.03 MiB/s [2024-11-29T12:20:56.276Z] 8455.80 IOPS, 66.06 MiB/s 00:36:53.596 Latency(us) 00:36:53.596 [2024-11-29T12:20:56.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.596 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:53.596 Verification LBA range: start 0x0 length 0x1000 00:36:53.596 Nvme1n1 : 10.01 8460.42 66.10 0.00 0.00 15082.28 1460.91 27197.44 00:36:53.596 [2024-11-29T12:20:56.276Z] =================================================================================================================== 00:36:53.596 [2024-11-29T12:20:56.276Z] Total : 8460.42 66.10 0.00 0.00 15082.28 1460.91 27197.44 00:36:53.596 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1185572 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:53.597 13:20:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.597 { 00:36:53.597 "params": { 00:36:53.597 "name": "Nvme$subsystem", 00:36:53.597 "trtype": "$TEST_TRANSPORT", 00:36:53.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.597 "adrfam": "ipv4", 00:36:53.597 "trsvcid": "$NVMF_PORT", 00:36:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.597 "hdgst": ${hdgst:-false}, 00:36:53.597 "ddgst": ${ddgst:-false} 00:36:53.597 }, 00:36:53.597 "method": "bdev_nvme_attach_controller" 00:36:53.597 } 00:36:53.597 EOF 00:36:53.597 )") 00:36:53.597 [2024-11-29 13:20:56.229832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.597 [2024-11-29 13:20:56.229863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:53.597 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.597 "params": { 00:36:53.597 "name": "Nvme1", 00:36:53.597 "trtype": "tcp", 00:36:53.597 "traddr": "10.0.0.2", 00:36:53.597 "adrfam": "ipv4", 00:36:53.597 "trsvcid": "4420", 00:36:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.597 "hdgst": false, 00:36:53.597 "ddgst": false 00:36:53.597 }, 00:36:53.597 "method": "bdev_nvme_attach_controller" 00:36:53.597 }' 00:36:53.597 [2024-11-29 13:20:56.241794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.597 [2024-11-29 13:20:56.241802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.597 [2024-11-29 13:20:56.253791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.597 [2024-11-29 13:20:56.253798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.597 [2024-11-29 13:20:56.265790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.597 [2024-11-29 13:20:56.265797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.277791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.277799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.286661] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:36:53.858 [2024-11-29 13:20:56.286709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185572 ] 00:36:53.858 [2024-11-29 13:20:56.289790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.289797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.301791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.301797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.313790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.313798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.325790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.325797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.337790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.337799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.349791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.349799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.361793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.361802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:36:53.858 [2024-11-29 13:20:56.368969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.858 [2024-11-29 13:20:56.373792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.373799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.385790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.385800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.397791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.397802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.398381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.858 [2024-11-29 13:20:56.409792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.409801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.421793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.421807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.433792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.433803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.445793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.445803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.457790] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.457798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.469802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.469819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.481795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.481807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.493794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.493804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.505791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.505799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.517791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.517798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:53.858 [2024-11-29 13:20:56.529791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:53.858 [2024-11-29 13:20:56.529798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 [2024-11-29 13:20:56.541793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.541803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 [2024-11-29 13:20:56.553793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.553804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 [2024-11-29 13:20:56.565799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.565814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 Running I/O for 5 seconds... 00:36:54.119 [2024-11-29 13:20:56.581258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.581274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 [2024-11-29 13:20:56.594737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.594754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.119 [2024-11-29 13:20:56.608951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.119 [2024-11-29 13:20:56.608971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.622072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.622087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.636811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.636826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.649874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.649889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.662861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.662875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.677625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.677641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.690972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.690987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.704993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.705008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.718280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.718294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.732412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.732427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.745937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.745952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.758904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.758919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.773592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 
[2024-11-29 13:20:56.773607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.120 [2024-11-29 13:20:56.786518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.120 [2024-11-29 13:20:56.786533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.800829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.800844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.814280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.814294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.829585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.829600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.842701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.842716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.857303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.857318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.870705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.870723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.885095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.885110] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.898154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.898172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.913109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.913124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.926215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.926229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.940911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.940926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.953809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.953824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.967305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.967320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.981065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.381 [2024-11-29 13:20:56.981080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:54.381 [2024-11-29 13:20:56.994478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.382 [2024-11-29 13:20:56.994492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:54.382 [2024-11-29 13:20:57.009114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:54.382 [2024-11-29 13:20:57.009129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the two error messages above repeat in alternation, one pair roughly every 13-15 ms, from 13:20:57.009 through 13:20:59.214 (elapsed log time 00:36:54.382 to 00:36:56.728); only the periodic throughput checkpoints embedded in that run are reproduced below ...]
00:36:54.904 18928.00 IOPS, 147.88 MiB/s [2024-11-29T12:20:57.584Z] 
00:36:55.947 19005.00 IOPS, 148.48 MiB/s [2024-11-29T12:20:58.627Z] 
[2024-11-29 13:20:59.214338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 
[2024-11-29 13:20:59.214352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.228865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.228880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.242111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.242125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.256578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.256593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.269684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.269698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.282471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.282485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.296913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.296929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.310204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.310218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.324965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.324980] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.338126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.338140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.353093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.353109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.366172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.366186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.380895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.380911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.728 [2024-11-29 13:20:59.393958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.728 [2024-11-29 13:20:59.393972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.406888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.406902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.420970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.420989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.434100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.434114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:56.988 [2024-11-29 13:20:59.448910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.448924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.461648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.461662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.474963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.474977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.488977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.488991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.501931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.501946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.514772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.514786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.529738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.529753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.542727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.542742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.557090] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.557105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.569942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.569956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.582793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.582807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 19002.67 IOPS, 148.46 MiB/s [2024-11-29T12:20:59.668Z] [2024-11-29 13:20:59.597398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.597413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.610483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.610497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.625302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.625317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.638330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.638344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.653096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.653111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.988 [2024-11-29 13:20:59.666061] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:56.988 [2024-11-29 13:20:59.666074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.248 [2024-11-29 13:20:59.680415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.248 [2024-11-29 13:20:59.680434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.248 [2024-11-29 13:20:59.693707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.248 [2024-11-29 13:20:59.693721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.248 [2024-11-29 13:20:59.706240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.248 [2024-11-29 13:20:59.706254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.248 [2024-11-29 13:20:59.720781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.248 [2024-11-29 13:20:59.720796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.248 [2024-11-29 13:20:59.733836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.733850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.747041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.747055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.761284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.761298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.774452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.774466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.788906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.788920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.802130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.802144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.817547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.817561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.830330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.830344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.844639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.844653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.857684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.857699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.870822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.870836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.885412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 
[2024-11-29 13:20:59.885427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.898649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.898664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.912753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.912769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.249 [2024-11-29 13:20:59.925807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.249 [2024-11-29 13:20:59.925822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:20:59.938445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:20:59.938463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:20:59.952914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:20:59.952928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:20:59.965963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:20:59.965978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:20:59.978950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:20:59.978965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:20:59.993573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:20:59.993588] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:21:00.007531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:21:00.007547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:21:00.021361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:21:00.021376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:21:00.034743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:21:00.034758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:21:00.048705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:21:00.048720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.538 [2024-11-29 13:21:00.061765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.538 [2024-11-29 13:21:00.061780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.074649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.074663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.088971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.088987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.102260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.102275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:57.539 [2024-11-29 13:21:00.117537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.117553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.130759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.130774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.145039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.145055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.158097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.158111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.172990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.173005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.186153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.186171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.200957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.200977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.539 [2024-11-29 13:21:00.213879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.539 [2024-11-29 13:21:00.213894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.226984] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.227000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.240941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.240957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.254279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.254293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.269128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.269143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.282374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.282389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.296908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.296924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.309945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.309961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.322960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.322974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.337281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.337296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.350243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.350258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.365071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.365085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.378302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.378316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.393409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.393424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.406754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.406769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.421422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.421436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.434422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.434436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.449209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 
[2024-11-29 13:21:00.449225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.462633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.462648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:57.800 [2024-11-29 13:21:00.477045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:57.800 [2024-11-29 13:21:00.477061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.490441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.490456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.505344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.505358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.518466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.518481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.533048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.533063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.546201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.546216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.560972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.560987] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.574143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.574163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 18998.50 IOPS, 148.43 MiB/s [2024-11-29T12:21:00.741Z] [2024-11-29 13:21:00.588538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.588553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.601320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.601334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.614165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.614179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.628533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.628548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.641537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.641553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.654863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.654877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.669043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.669058] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.682169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.682184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.696718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.696732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.709661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.709675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.723126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.723140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.061 [2024-11-29 13:21:00.737014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.061 [2024-11-29 13:21:00.737028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.750266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.750280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.765561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.765575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.778482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.778496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:58.321 [2024-11-29 13:21:00.793153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.793171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.806258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.806271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.821534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.821549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.834696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.834710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.849195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.849210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.861876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.861890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.874489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.874502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.889083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:58.321 [2024-11-29 13:21:00.889098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:58.321 [2024-11-29 13:21:00.902287] 
*ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.577130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 18997.20 IOPS, 148.42 MiB/s [2024-11-29T12:21:01.840Z] [2024-11-29 13:21:01.590180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.590193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 00:36:59.160 Latency(us) 00:36:59.160 [2024-11-29T12:21:01.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.160 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:59.160 Nvme1n1 : 5.01 19000.22 148.44 0.00 0.00 6731.31 2676.05 11632.64 00:36:59.160 [2024-11-29T12:21:01.840Z] =================================================================================================================== 00:36:59.160 [2024-11-29T12:21:01.840Z] Total : 19000.22 148.44 0.00 0.00 6731.31 2676.05 11632.64 00:36:59.160 [2024-11-29 13:21:01.601796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.601810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.613799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.613813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.625797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.625811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.637798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.637811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:36:59.160 [2024-11-29 13:21:01.649794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.649804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.661792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.661802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.673793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.673803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.685793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.685802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 [2024-11-29 13:21:01.697790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:59.160 [2024-11-29 13:21:01.697799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:59.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1185572) - No such process 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1185572 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:59.160 delay0 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.160 13:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:59.421 [2024-11-29 13:21:01.864652] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:06.013 Initializing NVMe Controllers 00:37:06.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:06.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:06.013 Initialization complete. Launching workers. 
00:37:06.013 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 953 00:37:06.013 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1235, failed to submit 38 00:37:06.013 success 1084, unsuccessful 151, failed 0 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:06.013 rmmod nvme_tcp 00:37:06.013 rmmod nvme_fabrics 00:37:06.013 rmmod nvme_keyring 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1183530 ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 1183530 ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1183530' 00:37:06.013 killing process with pid 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1183530 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:06.013 13:21:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.013 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.929 00:37:07.929 real 0m33.711s 00:37:07.929 user 0m42.641s 00:37:07.929 sys 0m12.397s 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:07.929 ************************************ 00:37:07.929 END TEST nvmf_zcopy 00:37:07.929 ************************************ 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:07.929 
************************************ 00:37:07.929 START TEST nvmf_nmic 00:37:07.929 ************************************ 00:37:07.929 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:08.191 * Looking for test storage... 00:37:08.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.191 13:21:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:37:08.191 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.192 13:21:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:08.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.192 --rc genhtml_branch_coverage=1 00:37:08.192 --rc genhtml_function_coverage=1 00:37:08.192 --rc genhtml_legend=1 00:37:08.192 --rc geninfo_all_blocks=1 00:37:08.192 --rc geninfo_unexecuted_blocks=1 00:37:08.192 00:37:08.192 ' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:08.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.192 --rc genhtml_branch_coverage=1 00:37:08.192 --rc genhtml_function_coverage=1 00:37:08.192 --rc genhtml_legend=1 00:37:08.192 --rc geninfo_all_blocks=1 00:37:08.192 --rc geninfo_unexecuted_blocks=1 00:37:08.192 00:37:08.192 ' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:08.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.192 --rc genhtml_branch_coverage=1 00:37:08.192 --rc genhtml_function_coverage=1 00:37:08.192 --rc genhtml_legend=1 00:37:08.192 --rc geninfo_all_blocks=1 00:37:08.192 --rc geninfo_unexecuted_blocks=1 00:37:08.192 00:37:08.192 ' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:08.192 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.192 --rc genhtml_branch_coverage=1 00:37:08.192 --rc genhtml_function_coverage=1 00:37:08.192 --rc genhtml_legend=1 00:37:08.192 --rc geninfo_all_blocks=1 00:37:08.192 --rc geninfo_unexecuted_blocks=1 00:37:08.192 00:37:08.192 ' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:08.192 13:21:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.192 13:21:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.192 13:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.332 13:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:16.332 13:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:16.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:16.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.332 13:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.332 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:16.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.333 13:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:16.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:16.333 13:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:16.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:16.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:37:16.333 00:37:16.333 --- 10.0.0.2 ping statistics --- 00:37:16.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.333 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:16.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:16.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:37:16.333 00:37:16.333 --- 10.0.0.1 ping statistics --- 00:37:16.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.333 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1192499 
00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1192499 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1192499 ']' 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.333 13:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.333 [2024-11-29 13:21:18.019220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:16.333 [2024-11-29 13:21:18.020236] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:37:16.333 [2024-11-29 13:21:18.020280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.333 [2024-11-29 13:21:18.111350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:16.333 [2024-11-29 13:21:18.148930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.333 [2024-11-29 13:21:18.148962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.333 [2024-11-29 13:21:18.148968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.333 [2024-11-29 13:21:18.148973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.333 [2024-11-29 13:21:18.148977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:16.333 [2024-11-29 13:21:18.150579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.333 [2024-11-29 13:21:18.150737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:16.333 [2024-11-29 13:21:18.150877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.333 [2024-11-29 13:21:18.150880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:16.333 [2024-11-29 13:21:18.205729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:16.333 [2024-11-29 13:21:18.206653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:16.333 [2024-11-29 13:21:18.207518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:16.333 [2024-11-29 13:21:18.207895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:16.333 [2024-11-29 13:21:18.207942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.333 [2024-11-29 13:21:18.883411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.333 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 Malloc0 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 [2024-11-29 13:21:18.971642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.334 13:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:37:16.334 test case1: single bdev can't be used in multiple subsystems 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:37:16.334 13:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.334 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.334 [2024-11-29 13:21:19.007325] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:37:16.334 [2024-11-29 13:21:19.007353] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:37:16.334 [2024-11-29 13:21:19.007363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:16.594 request: 00:37:16.594 { 00:37:16.594 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:37:16.594 "namespace": { 00:37:16.594 "bdev_name": "Malloc0", 00:37:16.594 "no_auto_visible": false, 00:37:16.594 "hide_metadata": false 00:37:16.594 }, 00:37:16.594 "method": "nvmf_subsystem_add_ns", 00:37:16.594 "req_id": 1 00:37:16.594 } 00:37:16.594 Got JSON-RPC error response 00:37:16.594 response: 00:37:16.594 { 00:37:16.594 "code": -32602, 00:37:16.595 "message": "Invalid parameters" 00:37:16.595 } 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:37:16.595 Adding namespace failed - expected result. 
00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:37:16.595 test case2: host connect to nvmf target in multiple paths 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:16.595 [2024-11-29 13:21:19.019457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.595 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:16.855 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:37:17.425 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:37:17.425 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:37:17.425 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:17.425 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:17.425 13:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:37:19.342 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:19.342 [global] 00:37:19.342 thread=1 00:37:19.342 invalidate=1 00:37:19.342 rw=write 00:37:19.342 time_based=1 00:37:19.342 runtime=1 00:37:19.342 ioengine=libaio 00:37:19.342 direct=1 00:37:19.342 bs=4096 00:37:19.342 iodepth=1 00:37:19.342 norandommap=0 00:37:19.342 numjobs=1 00:37:19.342 00:37:19.342 verify_dump=1 00:37:19.342 verify_backlog=512 00:37:19.342 verify_state_save=0 00:37:19.342 do_verify=1 00:37:19.342 verify=crc32c-intel 00:37:19.342 [job0] 00:37:19.342 filename=/dev/nvme0n1 00:37:19.342 Could not set queue depth (nvme0n1) 00:37:19.910 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.910 fio-3.35 00:37:19.910 Starting 1 thread 00:37:20.850 00:37:20.850 job0: (groupid=0, jobs=1): err= 0: pid=1193664: Fri Nov 29 
13:21:23 2024 00:37:20.850 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:37:20.850 slat (nsec): min=24783, max=25253, avg=24985.72, stdev=119.62 00:37:20.850 clat (usec): min=1089, max=42009, avg=39687.75, stdev=9633.03 00:37:20.850 lat (usec): min=1114, max=42034, avg=39712.74, stdev=9633.02 00:37:20.850 clat percentiles (usec): 00:37:20.850 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:37:20.850 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:20.850 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:20.850 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:20.850 | 99.99th=[42206] 00:37:20.850 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:37:20.850 slat (nsec): min=9542, max=61449, avg=28201.11, stdev=9323.16 00:37:20.850 clat (usec): min=303, max=810, avg=599.86, stdev=92.47 00:37:20.850 lat (usec): min=313, max=828, avg=628.06, stdev=96.04 00:37:20.850 clat percentiles (usec): 00:37:20.850 | 1.00th=[ 347], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 510], 00:37:20.850 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 635], 00:37:20.850 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 725], 00:37:20.850 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 807], 99.95th=[ 807], 00:37:20.850 | 99.99th=[ 807] 00:37:20.850 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:20.850 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:20.850 lat (usec) : 500=16.04%, 750=78.11%, 1000=2.45% 00:37:20.850 lat (msec) : 2=0.19%, 50=3.21% 00:37:20.850 cpu : usr=1.06%, sys=0.96%, ctx=530, majf=0, minf=1 00:37:20.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:20.850 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:20.850 00:37:20.850 Run status group 0 (all jobs): 00:37:20.850 READ: bw=69.2KiB/s (70.9kB/s), 69.2KiB/s-69.2KiB/s (70.9kB/s-70.9kB/s), io=72.0KiB (73.7kB), run=1040-1040msec 00:37:20.850 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:37:20.850 00:37:20.850 Disk stats (read/write): 00:37:20.850 nvme0n1: ios=64/512, merge=0/0, ticks=933/302, in_queue=1235, util=97.49% 00:37:20.850 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:21.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:21.111 rmmod nvme_tcp 00:37:21.111 rmmod nvme_fabrics 00:37:21.111 rmmod nvme_keyring 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1192499 ']' 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1192499 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1192499 ']' 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1192499 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.111 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1192499 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1192499' 00:37:21.372 killing process with pid 1192499 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1192499 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1192499 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.372 13:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.372 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:23.915 00:37:23.915 real 0m15.449s 00:37:23.915 user 0m37.537s 00:37:23.915 sys 0m7.195s 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:23.915 ************************************ 00:37:23.915 END TEST nvmf_nmic 00:37:23.915 ************************************ 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:23.915 ************************************ 00:37:23.915 START TEST nvmf_fio_target 00:37:23.915 ************************************ 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:23.915 * Looking for test storage... 
00:37:23.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.915 
13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.915 --rc genhtml_branch_coverage=1 00:37:23.915 --rc genhtml_function_coverage=1 00:37:23.915 --rc genhtml_legend=1 00:37:23.915 --rc geninfo_all_blocks=1 00:37:23.915 --rc geninfo_unexecuted_blocks=1 00:37:23.915 00:37:23.915 ' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.915 --rc genhtml_branch_coverage=1 00:37:23.915 --rc genhtml_function_coverage=1 00:37:23.915 --rc genhtml_legend=1 00:37:23.915 --rc geninfo_all_blocks=1 00:37:23.915 --rc geninfo_unexecuted_blocks=1 00:37:23.915 00:37:23.915 ' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.915 --rc genhtml_branch_coverage=1 00:37:23.915 --rc genhtml_function_coverage=1 00:37:23.915 --rc genhtml_legend=1 00:37:23.915 --rc geninfo_all_blocks=1 00:37:23.915 --rc geninfo_unexecuted_blocks=1 00:37:23.915 00:37:23.915 ' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.915 --rc genhtml_branch_coverage=1 00:37:23.915 --rc genhtml_function_coverage=1 00:37:23.915 --rc genhtml_legend=1 00:37:23.915 --rc geninfo_all_blocks=1 
00:37:23.915 --rc geninfo_unexecuted_blocks=1 00:37:23.915 00:37:23.915 ' 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.915 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.916 
13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.916 13:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.916 
13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.916 13:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.916 13:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:32.053 13:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:32.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:32.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.053 
13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:32.053 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:32.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:32.053 13:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:32.053 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:32.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:32.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:37:32.054 00:37:32.054 --- 10.0.0.2 ping statistics --- 00:37:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.054 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:32.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:32.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:37:32.054 00:37:32.054 --- 10.0.0.1 ping statistics --- 00:37:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:32.054 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:32.054 13:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1198012 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1198012 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1198012 ']' 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.054 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:32.054 [2024-11-29 13:21:33.954400] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:32.054 [2024-11-29 13:21:33.955568] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:37:32.054 [2024-11-29 13:21:33.955622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.054 [2024-11-29 13:21:34.054781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:32.054 [2024-11-29 13:21:34.107175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.054 [2024-11-29 13:21:34.107249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.054 [2024-11-29 13:21:34.107259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.054 [2024-11-29 13:21:34.107266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.054 [2024-11-29 13:21:34.107273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:32.054 [2024-11-29 13:21:34.109632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.054 [2024-11-29 13:21:34.109793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:32.054 [2024-11-29 13:21:34.109958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.054 [2024-11-29 13:21:34.109959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:32.054 [2024-11-29 13:21:34.188321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:32.054 [2024-11-29 13:21:34.189371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:32.054 [2024-11-29 13:21:34.189509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:32.054 [2024-11-29 13:21:34.189905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:32.054 [2024-11-29 13:21:34.189940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.316 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:32.576 [2024-11-29 13:21:34.998607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.576 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:32.836 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:32.836 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:37:32.836 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:32.836 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.097 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:33.097 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.357 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:33.357 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:33.618 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.618 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:33.618 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.879 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:33.879 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:34.140 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:37:34.140 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:34.140 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:34.400 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:34.400 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:34.660 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:34.660 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:34.660 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.919 [2024-11-29 13:21:37.454597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.919 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:35.178 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:35.178 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:35.770 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:37:37.677 13:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:37.677 [global] 00:37:37.677 thread=1 00:37:37.678 invalidate=1 00:37:37.678 rw=write 00:37:37.678 time_based=1 00:37:37.678 runtime=1 00:37:37.678 ioengine=libaio 00:37:37.678 direct=1 00:37:37.678 bs=4096 00:37:37.678 iodepth=1 00:37:37.678 norandommap=0 00:37:37.678 numjobs=1 00:37:37.678 00:37:37.678 verify_dump=1 00:37:37.678 verify_backlog=512 00:37:37.678 verify_state_save=0 00:37:37.678 do_verify=1 00:37:37.678 verify=crc32c-intel 00:37:37.678 [job0] 00:37:37.678 filename=/dev/nvme0n1 00:37:37.678 [job1] 00:37:37.678 filename=/dev/nvme0n2 00:37:37.678 [job2] 00:37:37.678 filename=/dev/nvme0n3 00:37:37.678 [job3] 00:37:37.678 filename=/dev/nvme0n4 00:37:37.678 Could not set queue depth (nvme0n1) 00:37:37.678 Could not set queue depth (nvme0n2) 00:37:37.678 Could not set queue depth (nvme0n3) 00:37:37.678 Could not set queue depth (nvme0n4) 00:37:38.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:38.257 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:38.257 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:38.257 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:38.257 fio-3.35 00:37:38.257 Starting 4 threads 00:37:39.639 00:37:39.639 job0: (groupid=0, jobs=1): err= 0: pid=1199571: Fri Nov 29 13:21:41 2024 00:37:39.639 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:37:39.639 slat (nsec): min=26688, max=28017, avg=27351.35, stdev=410.62 00:37:39.639 clat (usec): min=1141, max=42255, avg=39220.01, stdev=9823.44 00:37:39.639 lat (usec): min=1168, 
max=42283, avg=39247.36, stdev=9823.48 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41157], 00:37:39.639 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:37:39.639 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:39.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:39.639 | 99.99th=[42206] 00:37:39.639 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:37:39.639 slat (nsec): min=9582, max=56032, avg=26568.17, stdev=12399.70 00:37:39.639 clat (usec): min=148, max=1242, avg=662.71, stdev=226.45 00:37:39.639 lat (usec): min=159, max=1296, avg=689.27, stdev=233.83 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[ 190], 5.00th=[ 314], 10.00th=[ 371], 20.00th=[ 437], 00:37:39.639 | 30.00th=[ 506], 40.00th=[ 578], 50.00th=[ 644], 60.00th=[ 750], 00:37:39.639 | 70.00th=[ 832], 80.00th=[ 898], 90.00th=[ 963], 95.00th=[ 996], 00:37:39.639 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1237], 99.95th=[ 1237], 00:37:39.639 | 99.99th=[ 1237] 00:37:39.639 bw ( KiB/s): min= 4096, max= 4096, per=46.20%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.639 lat (usec) : 250=1.51%, 500=26.47%, 750=29.11%, 1000=34.97% 00:37:39.639 lat (msec) : 2=4.91%, 50=3.02% 00:37:39.639 cpu : usr=0.68%, sys=1.57%, ctx=533, majf=0, minf=1 00:37:39.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.639 job1: (groupid=0, jobs=1): err= 0: pid=1199589: Fri Nov 29 13:21:41 2024 00:37:39.639 read: 
IOPS=265, BW=1063KiB/s (1088kB/s)(1064KiB/1001msec) 00:37:39.639 slat (nsec): min=10169, max=45707, avg=27058.17, stdev=2507.75 00:37:39.639 clat (usec): min=798, max=42117, avg=2411.29, stdev=7398.17 00:37:39.639 lat (usec): min=826, max=42144, avg=2438.35, stdev=7398.11 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[ 840], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 979], 00:37:39.639 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:37:39.639 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:37:39.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:39.639 | 99.99th=[42206] 00:37:39.639 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:37:39.639 slat (nsec): min=9128, max=70574, avg=31525.26, stdev=8827.67 00:37:39.639 clat (usec): min=281, max=953, avg=642.98, stdev=121.89 00:37:39.639 lat (usec): min=293, max=987, avg=674.51, stdev=125.84 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[ 318], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 545], 00:37:39.639 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:37:39.639 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832], 00:37:39.639 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:37:39.639 | 99.99th=[ 955] 00:37:39.639 bw ( KiB/s): min= 4096, max= 4096, per=46.20%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.639 lat (usec) : 500=8.74%, 750=44.22%, 1000=24.29% 00:37:39.639 lat (msec) : 2=21.59%, 50=1.16% 00:37:39.639 cpu : usr=1.80%, sys=2.90%, ctx=778, majf=0, minf=2 00:37:39.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 issued rwts: total=266,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:37:39.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.639 job2: (groupid=0, jobs=1): err= 0: pid=1199602: Fri Nov 29 13:21:41 2024 00:37:39.639 read: IOPS=15, BW=62.4KiB/s (63.9kB/s)(64.0KiB/1025msec) 00:37:39.639 slat (nsec): min=26176, max=26758, avg=26405.00, stdev=187.83 00:37:39.639 clat (usec): min=41839, max=42112, avg=41954.41, stdev=67.86 00:37:39.639 lat (usec): min=41866, max=42139, avg=41980.81, stdev=67.88 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:37:39.639 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:39.639 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:39.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:39.639 | 99.99th=[42206] 00:37:39.639 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:37:39.639 slat (nsec): min=10301, max=52513, avg=31780.71, stdev=9056.42 00:37:39.639 clat (usec): min=299, max=989, avg=649.81, stdev=117.36 00:37:39.639 lat (usec): min=310, max=1024, avg=681.59, stdev=121.57 00:37:39.639 clat percentiles (usec): 00:37:39.639 | 1.00th=[ 347], 5.00th=[ 429], 10.00th=[ 494], 20.00th=[ 553], 00:37:39.639 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 701], 00:37:39.639 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 816], 00:37:39.639 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:37:39.639 | 99.99th=[ 988] 00:37:39.639 bw ( KiB/s): min= 4096, max= 4096, per=46.20%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.639 lat (usec) : 500=11.17%, 750=68.94%, 1000=16.86% 00:37:39.639 lat (msec) : 50=3.03% 00:37:39.639 cpu : usr=0.88%, sys=1.46%, ctx=529, majf=0, minf=1 00:37:39.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.639 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.639 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.639 job3: (groupid=0, jobs=1): err= 0: pid=1199603: Fri Nov 29 13:21:41 2024 00:37:39.639 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:39.639 slat (nsec): min=7938, max=62672, avg=27573.23, stdev=2673.26 00:37:39.639 clat (usec): min=622, max=4284, avg=959.12, stdev=167.80 00:37:39.640 lat (usec): min=649, max=4312, avg=986.69, stdev=167.72 00:37:39.640 clat percentiles (usec): 00:37:39.640 | 1.00th=[ 709], 5.00th=[ 799], 10.00th=[ 840], 20.00th=[ 906], 00:37:39.640 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:37:39.640 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:37:39.640 | 99.00th=[ 1123], 99.50th=[ 1221], 99.90th=[ 4293], 99.95th=[ 4293], 00:37:39.640 | 99.99th=[ 4293] 00:37:39.640 write: IOPS=735, BW=2941KiB/s (3012kB/s)(2944KiB/1001msec); 0 zone resets 00:37:39.640 slat (nsec): min=9513, max=70074, avg=33792.71, stdev=8679.34 00:37:39.640 clat (usec): min=240, max=3807, avg=625.34, stdev=167.86 00:37:39.640 lat (usec): min=253, max=3843, avg=659.14, stdev=169.87 00:37:39.640 clat percentiles (usec): 00:37:39.640 | 1.00th=[ 306], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 523], 00:37:39.640 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 660], 00:37:39.640 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:37:39.640 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 3818], 99.95th=[ 3818], 00:37:39.640 | 99.99th=[ 3818] 00:37:39.640 bw ( KiB/s): min= 4096, max= 4096, per=46.20%, avg=4096.00, stdev= 0.00, samples=1 00:37:39.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:39.640 lat (usec) : 250=0.08%, 500=9.46%, 750=42.71%, 1000=38.46% 
00:37:39.640 lat (msec) : 2=9.13%, 4=0.08%, 10=0.08% 00:37:39.640 cpu : usr=2.40%, sys=5.40%, ctx=1249, majf=0, minf=1 00:37:39.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.640 issued rwts: total=512,736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:39.640 00:37:39.640 Run status group 0 (all jobs): 00:37:39.640 READ: bw=3165KiB/s (3241kB/s), 62.4KiB/s-2046KiB/s (63.9kB/s-2095kB/s), io=3244KiB (3322kB), run=1001-1025msec 00:37:39.640 WRITE: bw=8866KiB/s (9079kB/s), 1998KiB/s-2941KiB/s (2046kB/s-3012kB/s), io=9088KiB (9306kB), run=1001-1025msec 00:37:39.640 00:37:39.640 Disk stats (read/write): 00:37:39.640 nvme0n1: ios=64/512, merge=0/0, ticks=1355/284, in_queue=1639, util=96.19% 00:37:39.640 nvme0n2: ios=145/512, merge=0/0, ticks=522/278, in_queue=800, util=87.64% 00:37:39.640 nvme0n3: ios=33/512, merge=0/0, ticks=1385/316, in_queue=1701, util=96.83% 00:37:39.640 nvme0n4: ios=513/512, merge=0/0, ticks=1397/250, in_queue=1647, util=96.47% 00:37:39.640 13:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:39.640 [global] 00:37:39.640 thread=1 00:37:39.640 invalidate=1 00:37:39.640 rw=randwrite 00:37:39.640 time_based=1 00:37:39.640 runtime=1 00:37:39.640 ioengine=libaio 00:37:39.640 direct=1 00:37:39.640 bs=4096 00:37:39.640 iodepth=1 00:37:39.640 norandommap=0 00:37:39.640 numjobs=1 00:37:39.640 00:37:39.640 verify_dump=1 00:37:39.640 verify_backlog=512 00:37:39.640 verify_state_save=0 00:37:39.640 do_verify=1 00:37:39.640 verify=crc32c-intel 00:37:39.640 [job0] 00:37:39.640 filename=/dev/nvme0n1 00:37:39.640 [job1] 00:37:39.640 
filename=/dev/nvme0n2 00:37:39.640 [job2] 00:37:39.640 filename=/dev/nvme0n3 00:37:39.640 [job3] 00:37:39.640 filename=/dev/nvme0n4 00:37:39.640 Could not set queue depth (nvme0n1) 00:37:39.640 Could not set queue depth (nvme0n2) 00:37:39.640 Could not set queue depth (nvme0n3) 00:37:39.640 Could not set queue depth (nvme0n4) 00:37:39.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:39.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:39.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:39.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:39.900 fio-3.35 00:37:39.900 Starting 4 threads 00:37:41.293 00:37:41.293 job0: (groupid=0, jobs=1): err= 0: pid=1200024: Fri Nov 29 13:21:43 2024 00:37:41.293 read: IOPS=512, BW=2050KiB/s (2099kB/s)(2052KiB/1001msec) 00:37:41.293 slat (nsec): min=6816, max=46282, avg=26774.04, stdev=5413.89 00:37:41.293 clat (usec): min=549, max=1073, avg=830.84, stdev=87.96 00:37:41.293 lat (usec): min=561, max=1101, avg=857.61, stdev=89.33 00:37:41.293 clat percentiles (usec): 00:37:41.293 | 1.00th=[ 594], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 758], 00:37:41.293 | 30.00th=[ 791], 40.00th=[ 824], 50.00th=[ 848], 60.00th=[ 865], 00:37:41.293 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 922], 95.00th=[ 947], 00:37:41.293 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1074], 99.95th=[ 1074], 00:37:41.293 | 99.99th=[ 1074] 00:37:41.293 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:37:41.293 slat (nsec): min=9400, max=68457, avg=33650.75, stdev=7500.81 00:37:41.293 clat (usec): min=188, max=840, avg=500.99, stdev=107.71 00:37:41.293 lat (usec): min=198, max=875, avg=534.64, stdev=109.61 00:37:41.293 clat percentiles (usec): 00:37:41.293 | 1.00th=[ 
243], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 404], 00:37:41.293 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 529], 00:37:41.293 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 644], 95.00th=[ 676], 00:37:41.293 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 783], 99.95th=[ 840], 00:37:41.293 | 99.99th=[ 840] 00:37:41.293 bw ( KiB/s): min= 4096, max= 4096, per=29.64%, avg=4096.00, stdev= 0.00, samples=1 00:37:41.293 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:41.293 lat (usec) : 250=0.91%, 500=33.12%, 750=38.39%, 1000=27.13% 00:37:41.293 lat (msec) : 2=0.46% 00:37:41.293 cpu : usr=3.10%, sys=6.50%, ctx=1540, majf=0, minf=1 00:37:41.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:41.294 job1: (groupid=0, jobs=1): err= 0: pid=1200039: Fri Nov 29 13:21:43 2024 00:37:41.294 read: IOPS=19, BW=77.4KiB/s (79.2kB/s)(80.0KiB/1034msec) 00:37:41.294 slat (nsec): min=25947, max=26736, avg=26299.05, stdev=213.46 00:37:41.294 clat (usec): min=40805, max=41971, avg=41140.34, stdev=371.57 00:37:41.294 lat (usec): min=40832, max=41997, avg=41166.64, stdev=371.57 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:41.294 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:41.294 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:37:41.294 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:41.294 | 99.99th=[42206] 00:37:41.294 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:37:41.294 slat (nsec): min=9217, max=49496, avg=27028.73, 
stdev=10109.11 00:37:41.294 clat (usec): min=178, max=683, avg=377.36, stdev=88.85 00:37:41.294 lat (usec): min=210, max=715, avg=404.39, stdev=90.66 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[ 208], 5.00th=[ 241], 10.00th=[ 277], 20.00th=[ 310], 00:37:41.294 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 383], 00:37:41.294 | 70.00th=[ 412], 80.00th=[ 453], 90.00th=[ 502], 95.00th=[ 545], 00:37:41.294 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 685], 00:37:41.294 | 99.99th=[ 685] 00:37:41.294 bw ( KiB/s): min= 4096, max= 4096, per=29.64%, avg=4096.00, stdev= 0.00, samples=1 00:37:41.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:41.294 lat (usec) : 250=6.39%, 500=79.70%, 750=10.15% 00:37:41.294 lat (msec) : 50=3.76% 00:37:41.294 cpu : usr=0.87%, sys=1.16%, ctx=532, majf=0, minf=1 00:37:41.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:41.294 job2: (groupid=0, jobs=1): err= 0: pid=1200056: Fri Nov 29 13:21:43 2024 00:37:41.294 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:37:41.294 slat (nsec): min=25468, max=26198, avg=25694.72, stdev=185.73 00:37:41.294 clat (usec): min=1218, max=42076, avg=39569.47, stdev=9576.23 00:37:41.294 lat (usec): min=1244, max=42101, avg=39595.17, stdev=9576.20 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[41157], 20.00th=[41681], 00:37:41.294 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:41.294 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:41.294 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:37:41.294 | 99.99th=[42206] 00:37:41.294 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:37:41.294 slat (nsec): min=9453, max=52483, avg=30670.53, stdev=7329.41 00:37:41.294 clat (usec): min=282, max=1115, avg=589.04, stdev=142.11 00:37:41.294 lat (usec): min=314, max=1147, avg=619.71, stdev=143.64 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[ 314], 5.00th=[ 379], 10.00th=[ 416], 20.00th=[ 465], 00:37:41.294 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 619], 00:37:41.294 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 848], 00:37:41.294 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1123], 99.95th=[ 1123], 00:37:41.294 | 99.99th=[ 1123] 00:37:41.294 bw ( KiB/s): min= 4096, max= 4096, per=29.64%, avg=4096.00, stdev= 0.00, samples=1 00:37:41.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:41.294 lat (usec) : 500=29.43%, 750=53.58%, 1000=13.21% 00:37:41.294 lat (msec) : 2=0.57%, 50=3.21% 00:37:41.294 cpu : usr=1.06%, sys=1.26%, ctx=530, majf=0, minf=1 00:37:41.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:41.294 job3: (groupid=0, jobs=1): err= 0: pid=1200062: Fri Nov 29 13:21:43 2024 00:37:41.294 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:37:41.294 slat (nsec): min=6455, max=55281, avg=23202.46, stdev=7453.30 00:37:41.294 clat (usec): min=174, max=792, avg=461.96, stdev=102.60 00:37:41.294 lat (usec): min=183, max=818, avg=485.16, stdev=105.59 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[ 227], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 371], 00:37:41.294 | 30.00th=[ 412], 
40.00th=[ 457], 50.00th=[ 478], 60.00th=[ 498], 00:37:41.294 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 619], 00:37:41.294 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 775], 99.95th=[ 791], 00:37:41.294 | 99.99th=[ 791] 00:37:41.294 write: IOPS=1526, BW=6106KiB/s (6252kB/s)(6112KiB/1001msec); 0 zone resets 00:37:41.294 slat (nsec): min=9159, max=61380, avg=24771.52, stdev=10922.33 00:37:41.294 clat (usec): min=104, max=827, avg=293.07, stdev=153.99 00:37:41.294 lat (usec): min=114, max=888, avg=317.84, stdev=160.34 00:37:41.294 clat percentiles (usec): 00:37:41.294 | 1.00th=[ 110], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 129], 00:37:41.294 | 30.00th=[ 194], 40.00th=[ 239], 50.00th=[ 269], 60.00th=[ 302], 00:37:41.294 | 70.00th=[ 355], 80.00th=[ 420], 90.00th=[ 515], 95.00th=[ 594], 00:37:41.294 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 799], 99.95th=[ 824], 00:37:41.294 | 99.99th=[ 824] 00:37:41.294 bw ( KiB/s): min= 6384, max= 6384, per=46.19%, avg=6384.00, stdev= 0.00, samples=1 00:37:41.294 iops : min= 1596, max= 1596, avg=1596.00, stdev= 0.00, samples=1 00:37:41.294 lat (usec) : 250=27.19%, 500=50.16%, 750=22.34%, 1000=0.31% 00:37:41.294 cpu : usr=2.90%, sys=6.90%, ctx=2552, majf=0, minf=1 00:37:41.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.294 issued rwts: total=1024,1528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:41.294 00:37:41.294 Run status group 0 (all jobs): 00:37:41.294 READ: bw=6093KiB/s (6239kB/s), 69.6KiB/s-4092KiB/s (71.3kB/s-4190kB/s), io=6300KiB (6451kB), run=1001-1034msec 00:37:41.294 WRITE: bw=13.5MiB/s (14.2MB/s), 1979KiB/s-6106KiB/s (2026kB/s-6252kB/s), io=14.0MiB (14.6MB), run=1001-1035msec 00:37:41.294 00:37:41.294 Disk stats (read/write): 
00:37:41.294 nvme0n1: ios=538/713, merge=0/0, ticks=1349/266, in_queue=1615, util=96.59% 00:37:41.294 nvme0n2: ios=54/512, merge=0/0, ticks=679/184, in_queue=863, util=88.58% 00:37:41.294 nvme0n3: ios=13/512, merge=0/0, ticks=502/284, in_queue=786, util=88.38% 00:37:41.294 nvme0n4: ios=1024/1066, merge=0/0, ticks=443/282, in_queue=725, util=89.52% 00:37:41.294 13:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:41.294 [global] 00:37:41.294 thread=1 00:37:41.294 invalidate=1 00:37:41.294 rw=write 00:37:41.294 time_based=1 00:37:41.294 runtime=1 00:37:41.294 ioengine=libaio 00:37:41.294 direct=1 00:37:41.294 bs=4096 00:37:41.294 iodepth=128 00:37:41.294 norandommap=0 00:37:41.294 numjobs=1 00:37:41.294 00:37:41.294 verify_dump=1 00:37:41.294 verify_backlog=512 00:37:41.294 verify_state_save=0 00:37:41.294 do_verify=1 00:37:41.294 verify=crc32c-intel 00:37:41.294 [job0] 00:37:41.294 filename=/dev/nvme0n1 00:37:41.294 [job1] 00:37:41.294 filename=/dev/nvme0n2 00:37:41.294 [job2] 00:37:41.294 filename=/dev/nvme0n3 00:37:41.294 [job3] 00:37:41.294 filename=/dev/nvme0n4 00:37:41.294 Could not set queue depth (nvme0n1) 00:37:41.294 Could not set queue depth (nvme0n2) 00:37:41.294 Could not set queue depth (nvme0n3) 00:37:41.294 Could not set queue depth (nvme0n4) 00:37:41.555 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.555 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.555 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.555 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:41.555 fio-3.35 00:37:41.555 Starting 4 threads 00:37:42.937 00:37:42.937 job0: 
(groupid=0, jobs=1): err= 0: pid=1200527: Fri Nov 29 13:21:45 2024 00:37:42.937 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:37:42.937 slat (nsec): min=993, max=7751.6k, avg=61895.90, stdev=497901.56 00:37:42.937 clat (usec): min=2378, max=15391, avg=8254.47, stdev=1953.47 00:37:42.937 lat (usec): min=3999, max=17272, avg=8316.36, stdev=1990.11 00:37:42.937 clat percentiles (usec): 00:37:42.937 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6980], 00:37:42.937 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8094], 00:37:42.937 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[11469], 95.00th=[12518], 00:37:42.937 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15008], 99.95th=[15401], 00:37:42.937 | 99.99th=[15401] 00:37:42.937 write: IOPS=8207, BW=32.1MiB/s (33.6MB/s)(32.2MiB/1004msec); 0 zone resets 00:37:42.937 slat (nsec): min=1704, max=6798.7k, avg=54219.58, stdev=405915.02 00:37:42.937 clat (usec): min=802, max=15383, avg=7232.15, stdev=1993.43 00:37:42.937 lat (usec): min=864, max=15386, avg=7286.37, stdev=2005.47 00:37:42.937 clat percentiles (usec): 00:37:42.937 | 1.00th=[ 2704], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5211], 00:37:42.937 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7767], 00:37:42.937 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[10159], 95.00th=[10814], 00:37:42.937 | 99.00th=[11600], 99.50th=[12256], 99.90th=[14091], 99.95th=[14877], 00:37:42.937 | 99.99th=[15401] 00:37:42.937 bw ( KiB/s): min=32008, max=33528, per=29.17%, avg=32768.00, stdev=1074.80, samples=2 00:37:42.937 iops : min= 8002, max= 8382, avg=8192.00, stdev=268.70, samples=2 00:37:42.937 lat (usec) : 1000=0.02% 00:37:42.937 lat (msec) : 2=0.27%, 4=1.27%, 10=84.72%, 20=13.72% 00:37:42.937 cpu : usr=5.88%, sys=8.47%, ctx=498, majf=0, minf=1 00:37:42.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:42.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:42.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.937 issued rwts: total=8192,8240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.938 job1: (groupid=0, jobs=1): err= 0: pid=1200540: Fri Nov 29 13:21:45 2024 00:37:42.938 read: IOPS=7338, BW=28.7MiB/s (30.1MB/s)(28.8MiB/1004msec) 00:37:42.938 slat (nsec): min=947, max=7571.6k, avg=65249.73, stdev=426136.94 00:37:42.938 clat (usec): min=1280, max=15746, avg=8436.35, stdev=1273.22 00:37:42.938 lat (usec): min=4689, max=15759, avg=8501.60, stdev=1313.80 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7439], 00:37:42.938 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:37:42.938 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10814], 00:37:42.938 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13960], 99.95th=[13960], 00:37:42.938 | 99.99th=[15795] 00:37:42.938 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:37:42.938 slat (nsec): min=1619, max=14125k, avg=62333.97, stdev=365568.69 00:37:42.938 clat (usec): min=3460, max=19673, avg=8185.00, stdev=1289.48 00:37:42.938 lat (usec): min=3463, max=19689, avg=8247.33, stdev=1313.16 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 4883], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7373], 00:37:42.938 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8356], 00:37:42.938 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10290], 00:37:42.938 | 99.00th=[12256], 99.50th=[13304], 99.90th=[19530], 99.95th=[19792], 00:37:42.938 | 99.99th=[19792] 00:37:42.938 bw ( KiB/s): min=30456, max=30984, per=27.35%, avg=30720.00, stdev=373.35, samples=2 00:37:42.938 iops : min= 7614, max= 7746, avg=7680.00, stdev=93.34, samples=2 00:37:42.938 lat (msec) : 2=0.01%, 4=0.05%, 10=92.03%, 20=7.91% 00:37:42.938 cpu : usr=4.39%, 
sys=8.28%, ctx=755, majf=0, minf=1 00:37:42.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:42.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.938 issued rwts: total=7368,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.938 job2: (groupid=0, jobs=1): err= 0: pid=1200556: Fri Nov 29 13:21:45 2024 00:37:42.938 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:37:42.938 slat (nsec): min=917, max=6508.6k, avg=77937.92, stdev=512415.47 00:37:42.938 clat (usec): min=5591, max=19485, avg=9847.06, stdev=1695.30 00:37:42.938 lat (usec): min=5597, max=19493, avg=9925.00, stdev=1748.37 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8979], 00:37:42.938 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:37:42.938 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11863], 95.00th=[12911], 00:37:42.938 | 99.00th=[15926], 99.50th=[16712], 99.90th=[18482], 99.95th=[19530], 00:37:42.938 | 99.99th=[19530] 00:37:42.938 write: IOPS=6265, BW=24.5MiB/s (25.7MB/s)(24.5MiB/1003msec); 0 zone resets 00:37:42.938 slat (nsec): min=1565, max=7896.3k, avg=77837.69, stdev=429633.93 00:37:42.938 clat (usec): min=1144, max=70292, avg=10626.29, stdev=7647.02 00:37:42.938 lat (usec): min=1170, max=70295, avg=10704.13, stdev=7690.52 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 4686], 5.00th=[ 5473], 10.00th=[ 7177], 20.00th=[ 8356], 00:37:42.938 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:37:42.938 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[13173], 95.00th=[16450], 00:37:42.938 | 99.00th=[58983], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:37:42.938 | 99.99th=[70779] 00:37:42.938 bw ( KiB/s): min=24576, max=24880, per=22.01%, 
avg=24728.00, stdev=214.96, samples=2 00:37:42.938 iops : min= 6144, max= 6220, avg=6182.00, stdev=53.74, samples=2 00:37:42.938 lat (msec) : 2=0.12%, 4=0.31%, 10=71.89%, 20=25.97%, 50=0.96% 00:37:42.938 lat (msec) : 100=0.76% 00:37:42.938 cpu : usr=4.29%, sys=5.99%, ctx=642, majf=0, minf=2 00:37:42.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:42.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.938 issued rwts: total=6144,6284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.938 job3: (groupid=0, jobs=1): err= 0: pid=1200562: Fri Nov 29 13:21:45 2024 00:37:42.938 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:37:42.938 slat (nsec): min=1001, max=6264.6k, avg=78425.86, stdev=504053.75 00:37:42.938 clat (usec): min=3178, max=19056, avg=10014.58, stdev=1408.91 00:37:42.938 lat (usec): min=3186, max=19060, avg=10093.01, stdev=1459.08 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 6521], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9241], 00:37:42.938 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:37:42.938 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11338], 95.00th=[12780], 00:37:42.938 | 99.00th=[14353], 99.50th=[14484], 99.90th=[16450], 99.95th=[19006], 00:37:42.938 | 99.99th=[19006] 00:37:42.938 write: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(23.8MiB/1008msec); 0 zone resets 00:37:42.938 slat (nsec): min=1714, max=7817.1k, avg=84332.38, stdev=501708.66 00:37:42.938 clat (usec): min=1150, max=61893, avg=11671.44, stdev=7708.33 00:37:42.938 lat (usec): min=1160, max=61896, avg=11755.77, stdev=7757.62 00:37:42.938 clat percentiles (usec): 00:37:42.938 | 1.00th=[ 1713], 5.00th=[ 5342], 10.00th=[ 6718], 20.00th=[ 8291], 00:37:42.938 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[ 9896], 
60.00th=[10159], 00:37:42.938 | 70.00th=[10290], 80.00th=[11207], 90.00th=[17695], 95.00th=[28443], 00:37:42.938 | 99.00th=[48497], 99.50th=[55313], 99.90th=[62129], 99.95th=[62129], 00:37:42.938 | 99.99th=[62129] 00:37:42.938 bw ( KiB/s): min=23224, max=24576, per=21.28%, avg=23900.00, stdev=956.01, samples=2 00:37:42.938 iops : min= 5806, max= 6144, avg=5975.00, stdev=239.00, samples=2 00:37:42.938 lat (msec) : 2=0.63%, 4=1.00%, 10=50.55%, 20=42.99%, 50=4.50% 00:37:42.938 lat (msec) : 100=0.33% 00:37:42.938 cpu : usr=4.67%, sys=5.96%, ctx=476, majf=0, minf=2 00:37:42.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:42.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:42.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:42.938 issued rwts: total=5632,6102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:42.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:42.938 00:37:42.938 Run status group 0 (all jobs): 00:37:42.938 READ: bw=106MiB/s (111MB/s), 21.8MiB/s-31.9MiB/s (22.9MB/s-33.4MB/s), io=107MiB (112MB), run=1003-1008msec 00:37:42.938 WRITE: bw=110MiB/s (115MB/s), 23.6MiB/s-32.1MiB/s (24.8MB/s-33.6MB/s), io=111MiB (116MB), run=1003-1008msec 00:37:42.938 00:37:42.938 Disk stats (read/write): 00:37:42.938 nvme0n1: ios=6682/6864, merge=0/0, ticks=54045/47550, in_queue=101595, util=96.49% 00:37:42.938 nvme0n2: ios=6161/6327, merge=0/0, ticks=27297/25386, in_queue=52683, util=96.73% 00:37:42.938 nvme0n3: ios=5166/5199, merge=0/0, ticks=26384/29649, in_queue=56033, util=96.62% 00:37:42.938 nvme0n4: ios=4937/5120, merge=0/0, ticks=26634/31672, in_queue=58306, util=97.65% 00:37:42.938 13:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:42.938 [global] 00:37:42.938 thread=1 00:37:42.938 invalidate=1 
00:37:42.938 rw=randwrite 00:37:42.938 time_based=1 00:37:42.938 runtime=1 00:37:42.938 ioengine=libaio 00:37:42.938 direct=1 00:37:42.938 bs=4096 00:37:42.938 iodepth=128 00:37:42.938 norandommap=0 00:37:42.938 numjobs=1 00:37:42.938 00:37:42.938 verify_dump=1 00:37:42.938 verify_backlog=512 00:37:42.938 verify_state_save=0 00:37:42.938 do_verify=1 00:37:42.938 verify=crc32c-intel 00:37:42.938 [job0] 00:37:42.938 filename=/dev/nvme0n1 00:37:42.938 [job1] 00:37:42.938 filename=/dev/nvme0n2 00:37:42.938 [job2] 00:37:42.938 filename=/dev/nvme0n3 00:37:42.938 [job3] 00:37:42.938 filename=/dev/nvme0n4 00:37:42.938 Could not set queue depth (nvme0n1) 00:37:42.938 Could not set queue depth (nvme0n2) 00:37:42.938 Could not set queue depth (nvme0n3) 00:37:42.938 Could not set queue depth (nvme0n4) 00:37:43.198 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:43.198 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:43.198 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:43.198 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:43.198 fio-3.35 00:37:43.198 Starting 4 threads 00:37:44.604 00:37:44.604 job0: (groupid=0, jobs=1): err= 0: pid=1201011: Fri Nov 29 13:21:46 2024 00:37:44.604 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:37:44.604 slat (nsec): min=887, max=15996k, avg=87785.75, stdev=682634.55 00:37:44.604 clat (usec): min=2364, max=45696, avg=11349.68, stdev=7072.02 00:37:44.604 lat (usec): min=2370, max=52609, avg=11437.46, stdev=7128.87 00:37:44.604 clat percentiles (usec): 00:37:44.604 | 1.00th=[ 4490], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6456], 00:37:44.604 | 30.00th=[ 7767], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10290], 00:37:44.604 | 70.00th=[10945], 80.00th=[13698], 
90.00th=[20841], 95.00th=[27132], 00:37:44.604 | 99.00th=[38536], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:37:44.604 | 99.99th=[45876] 00:37:44.604 write: IOPS=5516, BW=21.5MiB/s (22.6MB/s)(21.7MiB/1006msec); 0 zone resets 00:37:44.604 slat (nsec): min=1523, max=16095k, avg=93977.03, stdev=637011.08 00:37:44.604 clat (usec): min=2099, max=48063, avg=12489.71, stdev=8167.09 00:37:44.604 lat (usec): min=2105, max=48072, avg=12583.69, stdev=8225.77 00:37:44.604 clat percentiles (usec): 00:37:44.604 | 1.00th=[ 3326], 5.00th=[ 4555], 10.00th=[ 5473], 20.00th=[ 6456], 00:37:44.604 | 30.00th=[ 7111], 40.00th=[ 8094], 50.00th=[ 9241], 60.00th=[10814], 00:37:44.604 | 70.00th=[14222], 80.00th=[17957], 90.00th=[23725], 95.00th=[29754], 00:37:44.604 | 99.00th=[39060], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:37:44.604 | 99.99th=[47973] 00:37:44.604 bw ( KiB/s): min=16384, max=27000, per=21.95%, avg=21692.00, stdev=7506.65, samples=2 00:37:44.604 iops : min= 4096, max= 6750, avg=5423.00, stdev=1876.66, samples=2 00:37:44.604 lat (msec) : 4=1.75%, 10=52.84%, 20=31.36%, 50=14.05% 00:37:44.604 cpu : usr=3.68%, sys=5.07%, ctx=399, majf=0, minf=1 00:37:44.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:44.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:44.604 issued rwts: total=5120,5550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:44.604 job1: (groupid=0, jobs=1): err= 0: pid=1201023: Fri Nov 29 13:21:46 2024 00:37:44.604 read: IOPS=5496, BW=21.5MiB/s (22.5MB/s)(21.7MiB/1012msec) 00:37:44.604 slat (nsec): min=939, max=13514k, avg=83468.35, stdev=677009.84 00:37:44.604 clat (usec): min=1861, max=38200, avg=11742.87, stdev=5235.41 00:37:44.604 lat (usec): min=2052, max=40050, avg=11826.34, stdev=5282.18 00:37:44.604 clat percentiles 
(usec): 00:37:44.604 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 7308], 20.00th=[ 8160], 00:37:44.604 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11076], 00:37:44.604 | 70.00th=[12256], 80.00th=[15008], 90.00th=[19792], 95.00th=[21365], 00:37:44.604 | 99.00th=[29754], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:37:44.604 | 99.99th=[38011] 00:37:44.604 write: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1012msec); 0 zone resets 00:37:44.604 slat (nsec): min=1565, max=17583k, avg=74349.94, stdev=526971.52 00:37:44.604 clat (usec): min=977, max=38015, avg=11175.00, stdev=6363.70 00:37:44.604 lat (usec): min=986, max=38038, avg=11249.35, stdev=6407.51 00:37:44.604 clat percentiles (usec): 00:37:44.604 | 1.00th=[ 1598], 5.00th=[ 3687], 10.00th=[ 5080], 20.00th=[ 6259], 00:37:44.604 | 30.00th=[ 7242], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[10028], 00:37:44.604 | 70.00th=[13566], 80.00th=[16909], 90.00th=[21103], 95.00th=[23200], 00:37:44.604 | 99.00th=[30016], 99.50th=[30278], 99.90th=[35390], 99.95th=[35914], 00:37:44.604 | 99.99th=[38011] 00:37:44.604 bw ( KiB/s): min=20480, max=24576, per=22.79%, avg=22528.00, stdev=2896.31, samples=2 00:37:44.604 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:37:44.604 lat (usec) : 1000=0.10% 00:37:44.604 lat (msec) : 2=0.55%, 4=2.80%, 10=50.75%, 20=33.43%, 50=12.37% 00:37:44.604 cpu : usr=3.56%, sys=6.43%, ctx=417, majf=0, minf=1 00:37:44.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:44.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:44.604 issued rwts: total=5562,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:44.604 job2: (groupid=0, jobs=1): err= 0: pid=1201035: Fri Nov 29 13:21:46 2024 00:37:44.604 read: IOPS=7207, BW=28.2MiB/s 
(29.5MB/s)(28.5MiB/1012msec) 00:37:44.604 slat (nsec): min=959, max=11021k, avg=67044.37, stdev=522906.44 00:37:44.604 clat (usec): min=1764, max=31918, avg=9355.83, stdev=4107.34 00:37:44.605 lat (usec): min=1774, max=31926, avg=9422.87, stdev=4124.36 00:37:44.605 clat percentiles (usec): 00:37:44.605 | 1.00th=[ 2212], 5.00th=[ 4293], 10.00th=[ 5997], 20.00th=[ 7046], 00:37:44.605 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8979], 00:37:44.605 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[13698], 95.00th=[17433], 00:37:44.605 | 99.00th=[26346], 99.50th=[28967], 99.90th=[31327], 99.95th=[31851], 00:37:44.605 | 99.99th=[31851] 00:37:44.605 write: IOPS=7588, BW=29.6MiB/s (31.1MB/s)(30.0MiB/1012msec); 0 zone resets 00:37:44.605 slat (nsec): min=1598, max=10025k, avg=58396.99, stdev=480368.61 00:37:44.605 clat (usec): min=536, max=31891, avg=7815.61, stdev=2787.90 00:37:44.605 lat (usec): min=548, max=31893, avg=7874.01, stdev=2805.64 00:37:44.605 clat percentiles (usec): 00:37:44.605 | 1.00th=[ 1893], 5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 5407], 00:37:44.605 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:37:44.605 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[11600], 95.00th=[12256], 00:37:44.605 | 99.00th=[16319], 99.50th=[18482], 99.90th=[23987], 99.95th=[24249], 00:37:44.605 | 99.99th=[31851] 00:37:44.605 bw ( KiB/s): min=28656, max=32768, per=31.07%, avg=30712.00, stdev=2907.62, samples=2 00:37:44.605 iops : min= 7164, max= 8192, avg=7678.00, stdev=726.91, samples=2 00:37:44.605 lat (usec) : 750=0.01%, 1000=0.02% 00:37:44.605 lat (msec) : 2=1.14%, 4=3.24%, 10=72.21%, 20=21.60%, 50=1.79% 00:37:44.605 cpu : usr=4.65%, sys=7.91%, ctx=408, majf=0, minf=2 00:37:44.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:37:44.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:44.605 
issued rwts: total=7294,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:44.605 job3: (groupid=0, jobs=1): err= 0: pid=1201044: Fri Nov 29 13:21:46 2024 00:37:44.605 read: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(23.2MiB/1011msec) 00:37:44.605 slat (nsec): min=1017, max=13174k, avg=77703.63, stdev=628821.98 00:37:44.605 clat (usec): min=1999, max=37334, avg=10092.42, stdev=4254.61 00:37:44.605 lat (usec): min=3593, max=39323, avg=10170.12, stdev=4294.93 00:37:44.605 clat percentiles (usec): 00:37:44.605 | 1.00th=[ 4113], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7308], 00:37:44.605 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:37:44.605 | 70.00th=[10159], 80.00th=[11863], 90.00th=[15008], 95.00th=[19006], 00:37:44.605 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29754], 00:37:44.605 | 99.99th=[37487] 00:37:44.605 write: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec); 0 zone resets 00:37:44.605 slat (nsec): min=1604, max=33396k, avg=81656.70, stdev=731096.15 00:37:44.605 clat (usec): min=1298, max=45119, avg=11121.47, stdev=7285.72 00:37:44.605 lat (usec): min=1310, max=45128, avg=11203.12, stdev=7328.18 00:37:44.605 clat percentiles (usec): 00:37:44.605 | 1.00th=[ 3884], 5.00th=[ 4948], 10.00th=[ 5604], 20.00th=[ 6456], 00:37:44.605 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[10028], 00:37:44.605 | 70.00th=[11207], 80.00th=[13042], 90.00th=[21365], 95.00th=[30016], 00:37:44.605 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:37:44.605 | 99.99th=[45351] 00:37:44.605 bw ( KiB/s): min=19856, max=29296, per=24.86%, avg=24576.00, stdev=6675.09, samples=2 00:37:44.605 iops : min= 4964, max= 7324, avg=6144.00, stdev=1668.77, samples=2 00:37:44.605 lat (msec) : 2=0.17%, 4=0.66%, 10=62.19%, 20=29.13%, 50=7.86% 00:37:44.605 cpu : usr=3.66%, sys=6.93%, ctx=277, majf=0, minf=1 00:37:44.605 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:44.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:44.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:44.605 issued rwts: total=5931,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:44.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:44.605 00:37:44.605 Run status group 0 (all jobs): 00:37:44.605 READ: bw=92.3MiB/s (96.8MB/s), 19.9MiB/s-28.2MiB/s (20.8MB/s-29.5MB/s), io=93.4MiB (97.9MB), run=1006-1012msec 00:37:44.605 WRITE: bw=96.5MiB/s (101MB/s), 21.5MiB/s-29.6MiB/s (22.6MB/s-31.1MB/s), io=97.7MiB (102MB), run=1006-1012msec 00:37:44.605 00:37:44.605 Disk stats (read/write): 00:37:44.605 nvme0n1: ios=3634/3770, merge=0/0, ticks=19270/25578, in_queue=44848, util=85.57% 00:37:44.605 nvme0n2: ios=4659/5096, merge=0/0, ticks=40182/46286, in_queue=86468, util=91.22% 00:37:44.605 nvme0n3: ios=6294/6656, merge=0/0, ticks=50044/46258, in_queue=96302, util=91.18% 00:37:44.605 nvme0n4: ios=5178/5382, merge=0/0, ticks=39170/41837, in_queue=81007, util=94.07% 00:37:44.605 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:44.605 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1201188 00:37:44.605 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:44.605 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:44.605 [global] 00:37:44.605 thread=1 00:37:44.605 invalidate=1 00:37:44.605 rw=read 00:37:44.605 time_based=1 00:37:44.605 runtime=10 00:37:44.605 ioengine=libaio 00:37:44.605 direct=1 00:37:44.605 bs=4096 00:37:44.605 iodepth=1 00:37:44.605 norandommap=1 00:37:44.605 numjobs=1 00:37:44.605 00:37:44.605 [job0] 00:37:44.605 
filename=/dev/nvme0n1 00:37:44.605 [job1] 00:37:44.605 filename=/dev/nvme0n2 00:37:44.605 [job2] 00:37:44.605 filename=/dev/nvme0n3 00:37:44.605 [job3] 00:37:44.605 filename=/dev/nvme0n4 00:37:44.605 Could not set queue depth (nvme0n1) 00:37:44.605 Could not set queue depth (nvme0n2) 00:37:44.605 Could not set queue depth (nvme0n3) 00:37:44.605 Could not set queue depth (nvme0n4) 00:37:44.868 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:44.868 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:44.868 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:44.868 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:44.868 fio-3.35 00:37:44.868 Starting 4 threads 00:37:47.412 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:47.674 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:47.674 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:37:47.674 fio: pid=1201543, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:47.674 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=270336, buflen=4096 00:37:47.674 fio: pid=1201533, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:47.674 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:47.674 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:47.981 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10956800, buflen=4096 00:37:47.981 fio: pid=1201486, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:47.981 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:47.981 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:48.331 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15704064, buflen=4096 00:37:48.331 fio: pid=1201506, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:48.331 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:48.331 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:48.331 00:37:48.331 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1201486: Fri Nov 29 13:21:50 2024 00:37:48.331 read: IOPS=916, BW=3663KiB/s (3751kB/s)(10.4MiB/2921msec) 00:37:48.331 slat (usec): min=7, max=26262, avg=37.76, stdev=519.27 00:37:48.331 clat (usec): min=437, max=4674, avg=1039.45, stdev=150.75 00:37:48.331 lat (usec): min=463, max=27192, avg=1077.21, stdev=541.61 00:37:48.331 clat percentiles (usec): 00:37:48.331 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:37:48.331 | 30.00th=[ 971], 40.00th=[ 1004], 50.00th=[ 1045], 60.00th=[ 1074], 00:37:48.331 | 70.00th=[ 1106], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1237], 
00:37:48.331 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1729], 99.95th=[ 1811], 00:37:48.331 | 99.99th=[ 4686] 00:37:48.331 bw ( KiB/s): min= 3584, max= 3832, per=43.94%, avg=3740.80, stdev=96.27, samples=5 00:37:48.331 iops : min= 896, max= 958, avg=935.20, stdev=24.07, samples=5 00:37:48.331 lat (usec) : 500=0.04%, 750=1.42%, 1000=37.67% 00:37:48.331 lat (msec) : 2=60.80%, 10=0.04% 00:37:48.331 cpu : usr=0.82%, sys=2.95%, ctx=2678, majf=0, minf=1 00:37:48.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:48.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.331 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.331 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:48.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:48.331 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1201506: Fri Nov 29 13:21:50 2024 00:37:48.331 read: IOPS=1229, BW=4917KiB/s (5035kB/s)(15.0MiB/3119msec) 00:37:48.331 slat (usec): min=4, max=27558, avg=25.26, stdev=471.20 00:37:48.331 clat (usec): min=160, max=42007, avg=778.73, stdev=1976.38 00:37:48.331 lat (usec): min=165, max=68891, avg=803.99, stdev=2217.66 00:37:48.331 clat percentiles (usec): 00:37:48.331 | 1.00th=[ 453], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 594], 00:37:48.331 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 717], 00:37:48.331 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 840], 00:37:48.331 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[41681], 99.95th=[42206], 00:37:48.331 | 99.99th=[42206] 00:37:48.331 bw ( KiB/s): min= 1732, max= 6400, per=59.76%, avg=5087.33, stdev=1725.35, samples=6 00:37:48.331 iops : min= 433, max= 1600, avg=1271.83, stdev=431.34, samples=6 00:37:48.331 lat (usec) : 250=0.05%, 500=2.19%, 750=68.40%, 1000=29.10% 00:37:48.331 lat (msec) : 50=0.23% 00:37:48.331 cpu : usr=0.99%, 
sys=2.05%, ctx=3837, majf=0, minf=2 00:37:48.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:48.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.331 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.331 issued rwts: total=3835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:48.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:48.331 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1201533: Fri Nov 29 13:21:50 2024 00:37:48.332 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(264KiB/2738msec) 00:37:48.332 slat (nsec): min=12803, max=96111, avg=27243.67, stdev=8887.66 00:37:48.332 clat (usec): min=1159, max=42214, avg=41115.61, stdev=5011.42 00:37:48.332 lat (usec): min=1255, max=42240, avg=41142.86, stdev=5002.90 00:37:48.332 clat percentiles (usec): 00:37:48.332 | 1.00th=[ 1156], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:48.332 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:37:48.332 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:48.332 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:48.332 | 99.99th=[42206] 00:37:48.332 bw ( KiB/s): min= 96, max= 96, per=1.13%, avg=96.00, stdev= 0.00, samples=5 00:37:48.332 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:37:48.332 lat (msec) : 2=1.49%, 50=97.01% 00:37:48.332 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:37:48.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:48.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.332 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.332 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:48.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:48.332 job3: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1201543: Fri Nov 29 13:21:50 2024 00:37:48.332 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(248KiB/2585msec) 00:37:48.332 slat (nsec): min=24343, max=81521, avg=26580.98, stdev=7051.73 00:37:48.332 clat (usec): min=1013, max=42079, avg=41304.56, stdev=5201.11 00:37:48.332 lat (usec): min=1094, max=42105, avg=41331.15, stdev=5194.02 00:37:48.332 clat percentiles (usec): 00:37:48.332 | 1.00th=[ 1012], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:37:48.332 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:48.332 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:48.332 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:48.332 | 99.99th=[42206] 00:37:48.332 bw ( KiB/s): min= 96, max= 96, per=1.13%, avg=96.00, stdev= 0.00, samples=5 00:37:48.332 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:37:48.332 lat (msec) : 2=1.59%, 50=96.83% 00:37:48.332 cpu : usr=0.12%, sys=0.00%, ctx=64, majf=0, minf=2 00:37:48.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:48.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.332 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:48.332 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:48.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:48.332 00:37:48.332 Run status group 0 (all jobs): 00:37:48.332 READ: bw=8512KiB/s (8716kB/s), 95.9KiB/s-4917KiB/s (98.2kB/s-5035kB/s), io=25.9MiB (27.2MB), run=2585-3119msec 00:37:48.332 00:37:48.332 Disk stats (read/write): 00:37:48.332 nvme0n1: ios=2578/0, merge=0/0, ticks=2549/0, in_queue=2549, util=92.19% 00:37:48.332 nvme0n2: ios=3832/0, merge=0/0, ticks=2853/0, in_queue=2853, util=93.47% 00:37:48.332 nvme0n3: ios=61/0, merge=0/0, ticks=2510/0, in_queue=2510, util=95.60% 00:37:48.332 nvme0n4: ios=61/0, 
merge=0/0, ticks=2521/0, in_queue=2521, util=96.36% 00:37:48.332 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:48.332 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:48.594 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:48.594 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:48.594 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:48.594 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:48.855 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:48.855 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1201188 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:49.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:49.115 nvmf hotplug test: fio failed as expected 00:37:49.115 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:49.376 13:21:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.376 13:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.376 rmmod nvme_tcp 00:37:49.376 rmmod nvme_fabrics 00:37:49.376 rmmod nvme_keyring 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1198012 ']' 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1198012 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1198012 ']' 00:37:49.376 13:21:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1198012 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:49.376 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198012 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198012' 00:37:49.637 killing process with pid 1198012 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1198012 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1198012 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.637 13:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:52.184 00:37:52.184 real 0m28.171s 00:37:52.184 user 2m14.108s 00:37:52.184 sys 0m12.376s 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:52.184 ************************************ 00:37:52.184 END TEST nvmf_fio_target 00:37:52.184 ************************************ 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:37:52.184 ************************************ 00:37:52.184 START TEST nvmf_bdevio 00:37:52.184 ************************************ 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:52.184 * Looking for test storage... 00:37:52.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:52.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.184 --rc genhtml_branch_coverage=1 00:37:52.184 --rc genhtml_function_coverage=1 00:37:52.184 --rc genhtml_legend=1 00:37:52.184 --rc geninfo_all_blocks=1 00:37:52.184 --rc geninfo_unexecuted_blocks=1 00:37:52.184 00:37:52.184 ' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:52.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.184 --rc genhtml_branch_coverage=1 00:37:52.184 --rc genhtml_function_coverage=1 00:37:52.184 --rc genhtml_legend=1 00:37:52.184 --rc geninfo_all_blocks=1 00:37:52.184 --rc geninfo_unexecuted_blocks=1 00:37:52.184 00:37:52.184 ' 00:37:52.184 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:52.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.184 --rc genhtml_branch_coverage=1 00:37:52.184 --rc genhtml_function_coverage=1 00:37:52.184 --rc genhtml_legend=1 00:37:52.185 --rc geninfo_all_blocks=1 00:37:52.185 --rc geninfo_unexecuted_blocks=1 00:37:52.185 00:37:52.185 ' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:52.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.185 --rc genhtml_branch_coverage=1 00:37:52.185 --rc genhtml_function_coverage=1 00:37:52.185 --rc genhtml_legend=1 00:37:52.185 --rc geninfo_all_blocks=1 00:37:52.185 --rc geninfo_unexecuted_blocks=1 00:37:52.185 00:37:52.185 ' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.185 13:21:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:52.185 13:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:00.326 13:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:00.326 13:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:00.326 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:00.326 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:00.326 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:00.326 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:00.326 
13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:00.326 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:00.327 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:00.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:00.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:38:00.327 00:38:00.327 --- 10.0.0.2 ping statistics --- 00:38:00.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.327 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:00.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:00.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:38:00.327 00:38:00.327 --- 10.0.0.1 ping statistics --- 00:38:00.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.327 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1206560 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1206560 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1206560 ']' 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.327 [2024-11-29 13:22:02.164899] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:00.327 [2024-11-29 13:22:02.166051] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:38:00.327 [2024-11-29 13:22:02.166101] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.327 [2024-11-29 13:22:02.266682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:00.327 [2024-11-29 13:22:02.319870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.327 [2024-11-29 13:22:02.319923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.327 [2024-11-29 13:22:02.319931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.327 [2024-11-29 13:22:02.319939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.327 [2024-11-29 13:22:02.319945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:00.327 [2024-11-29 13:22:02.322067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:00.327 [2024-11-29 13:22:02.322232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:00.327 [2024-11-29 13:22:02.322453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:00.327 [2024-11-29 13:22:02.322455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:00.327 [2024-11-29 13:22:02.400756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:00.327 [2024-11-29 13:22:02.401768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:00.327 [2024-11-29 13:22:02.401990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:00.327 [2024-11-29 13:22:02.402399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:00.327 [2024-11-29 13:22:02.402440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.327 13:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 [2024-11-29 13:22:03.027477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 Malloc0 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:00.588 [2024-11-29 13:22:03.123647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:00.588 { 00:38:00.588 "params": { 00:38:00.588 "name": "Nvme$subsystem", 00:38:00.588 "trtype": "$TEST_TRANSPORT", 00:38:00.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:00.588 "adrfam": "ipv4", 00:38:00.588 "trsvcid": "$NVMF_PORT", 00:38:00.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:00.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:00.588 "hdgst": ${hdgst:-false}, 00:38:00.588 "ddgst": ${ddgst:-false} 00:38:00.588 }, 00:38:00.588 "method": "bdev_nvme_attach_controller" 00:38:00.588 } 00:38:00.588 EOF 00:38:00.588 )") 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:38:00.588 13:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:00.588 "params": { 00:38:00.588 "name": "Nvme1", 00:38:00.588 "trtype": "tcp", 00:38:00.588 "traddr": "10.0.0.2", 00:38:00.588 "adrfam": "ipv4", 00:38:00.588 "trsvcid": "4420", 00:38:00.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:00.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:00.588 "hdgst": false, 00:38:00.588 "ddgst": false 00:38:00.588 }, 00:38:00.588 "method": "bdev_nvme_attach_controller" 00:38:00.588 }' 00:38:00.588 [2024-11-29 13:22:03.182194] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:38:00.588 [2024-11-29 13:22:03.182270] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206761 ] 00:38:00.849 [2024-11-29 13:22:03.275240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:00.849 [2024-11-29 13:22:03.332085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.850 [2024-11-29 13:22:03.332250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.850 [2024-11-29 13:22:03.332250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:01.110 I/O targets: 00:38:01.110 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:38:01.110 00:38:01.110 00:38:01.110 CUnit - A unit testing framework for C - Version 2.1-3 00:38:01.110 http://cunit.sourceforge.net/ 00:38:01.110 00:38:01.110 00:38:01.110 Suite: bdevio tests on: Nvme1n1 00:38:01.110 Test: blockdev write read block ...passed 00:38:01.110 Test: blockdev write zeroes read block ...passed 00:38:01.110 Test: blockdev write zeroes read no split ...passed 00:38:01.110 Test: blockdev 
write zeroes read split ...passed 00:38:01.110 Test: blockdev write zeroes read split partial ...passed 00:38:01.110 Test: blockdev reset ...[2024-11-29 13:22:03.745971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:38:01.110 [2024-11-29 13:22:03.746058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bf970 (9): Bad file descriptor 00:38:01.371 [2024-11-29 13:22:03.794233] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:38:01.371 passed 00:38:01.371 Test: blockdev write read 8 blocks ...passed 00:38:01.371 Test: blockdev write read size > 128k ...passed 00:38:01.371 Test: blockdev write read invalid size ...passed 00:38:01.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:01.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:01.371 Test: blockdev write read max offset ...passed 00:38:01.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:01.371 Test: blockdev writev readv 8 blocks ...passed 00:38:01.371 Test: blockdev writev readv 30 x 1block ...passed 00:38:01.632 Test: blockdev writev readv block ...passed 00:38:01.632 Test: blockdev writev readv size > 128k ...passed 00:38:01.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:01.632 Test: blockdev comparev and writev ...[2024-11-29 13:22:04.059494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.632 [2024-11-29 13:22:04.059545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:01.632 [2024-11-29 13:22:04.059563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.632 
[2024-11-29 13:22:04.059573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.060199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.060213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.060229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.060239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.060849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.060861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.060886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.061509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.061522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.061536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:01.633 [2024-11-29 13:22:04.061546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:01.633 passed 00:38:01.633 Test: blockdev nvme passthru rw ...passed 00:38:01.633 Test: blockdev nvme passthru vendor specific ...[2024-11-29 13:22:04.146089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:01.633 [2024-11-29 13:22:04.146106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.146494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:01.633 [2024-11-29 13:22:04.146507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.146891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:01.633 [2024-11-29 13:22:04.146901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:01.633 [2024-11-29 13:22:04.147298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:01.633 [2024-11-29 13:22:04.147312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:01.633 passed 00:38:01.633 Test: blockdev nvme admin passthru ...passed 00:38:01.633 Test: blockdev copy ...passed 00:38:01.633 00:38:01.633 Run Summary: Type Total Ran Passed Failed Inactive 00:38:01.633 suites 1 1 n/a 0 0 00:38:01.633 tests 23 23 23 0 0 00:38:01.633 asserts 152 152 152 0 n/a 00:38:01.633 00:38:01.633 Elapsed time = 1.192 
seconds 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:01.894 rmmod nvme_tcp 00:38:01.894 rmmod nvme_fabrics 00:38:01.894 rmmod nvme_keyring 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1206560 ']' 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1206560 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1206560 ']' 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1206560 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1206560 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1206560' 00:38:01.894 killing process with pid 1206560 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1206560 00:38:01.894 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1206560 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:02.155 13:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.071 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:04.333 00:38:04.333 real 0m12.383s 00:38:04.333 user 0m10.308s 00:38:04.333 sys 0m6.511s 00:38:04.333 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.333 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:04.333 ************************************ 00:38:04.333 END TEST nvmf_bdevio 00:38:04.333 ************************************ 00:38:04.333 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:04.333 00:38:04.333 real 5m0.037s 00:38:04.333 user 10m18.348s 00:38:04.333 sys 2m5.049s 00:38:04.333 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.333 13:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:04.333 ************************************ 00:38:04.333 END TEST nvmf_target_core_interrupt_mode 00:38:04.333 ************************************ 00:38:04.333 13:22:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:04.333 13:22:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:04.333 13:22:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:04.333 13:22:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:04.333 ************************************ 00:38:04.333 START TEST nvmf_interrupt 00:38:04.333 ************************************ 00:38:04.333 13:22:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:04.333 * Looking for test storage... 
00:38:04.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:04.333 13:22:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:04.333 13:22:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:38:04.333 13:22:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:04.596 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.597 --rc genhtml_branch_coverage=1 00:38:04.597 --rc genhtml_function_coverage=1 00:38:04.597 --rc genhtml_legend=1 00:38:04.597 --rc geninfo_all_blocks=1 00:38:04.597 --rc geninfo_unexecuted_blocks=1 00:38:04.597 00:38:04.597 ' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.597 --rc genhtml_branch_coverage=1 00:38:04.597 --rc 
genhtml_function_coverage=1 00:38:04.597 --rc genhtml_legend=1 00:38:04.597 --rc geninfo_all_blocks=1 00:38:04.597 --rc geninfo_unexecuted_blocks=1 00:38:04.597 00:38:04.597 ' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.597 --rc genhtml_branch_coverage=1 00:38:04.597 --rc genhtml_function_coverage=1 00:38:04.597 --rc genhtml_legend=1 00:38:04.597 --rc geninfo_all_blocks=1 00:38:04.597 --rc geninfo_unexecuted_blocks=1 00:38:04.597 00:38:04.597 ' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:04.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.597 --rc genhtml_branch_coverage=1 00:38:04.597 --rc genhtml_function_coverage=1 00:38:04.597 --rc genhtml_legend=1 00:38:04.597 --rc geninfo_all_blocks=1 00:38:04.597 --rc geninfo_unexecuted_blocks=1 00:38:04.597 00:38:04.597 ' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:04.597 
13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.597 
13:22:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:04.597 13:22:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:04.597 
13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:04.597 13:22:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:12.789 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.789 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:38:12.789 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:12.789 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.790 13:22:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:12.790 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:12.790 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.790 13:22:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:12.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:12.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.790 13:22:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:12.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:38:12.790 00:38:12.790 --- 10.0.0.2 ping statistics --- 00:38:12.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.790 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:12.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:38:12.790 00:38:12.790 --- 10.0.0.1 ping statistics --- 00:38:12.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.790 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:38:12.790 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:12.791 13:22:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1211132 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1211132 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1211132 ']' 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.791 13:22:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:12.791 [2024-11-29 13:22:14.717648] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:12.791 [2024-11-29 13:22:14.718766] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:38:12.791 [2024-11-29 13:22:14.718819] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.791 [2024-11-29 13:22:14.819793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:12.791 [2024-11-29 13:22:14.871437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.791 [2024-11-29 13:22:14.871489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.791 [2024-11-29 13:22:14.871498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.791 [2024-11-29 13:22:14.871505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.791 [2024-11-29 13:22:14.871512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:12.791 [2024-11-29 13:22:14.873258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.791 [2024-11-29 13:22:14.873301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.791 [2024-11-29 13:22:14.951858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:12.791 [2024-11-29 13:22:14.952475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:12.791 [2024-11-29 13:22:14.952769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:38:13.052 5000+0 records in 00:38:13.052 5000+0 records out 00:38:13.052 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191429 s, 535 MB/s 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 AIO0 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.052 13:22:15 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 [2024-11-29 13:22:15.646359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:13.052 [2024-11-29 13:22:15.690939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1211132 0 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 0 idle 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:13.052 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211132 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0' 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211132 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:13.313 
13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1211132 1 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 1 idle 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:13.313 13:22:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211172 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211172 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1211485 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1211132 0 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1211132 0 busy 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:13.574 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211132 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.43 reactor_0' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211132 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.43 reactor_0 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:13.835 13:22:16 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1211132 1 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1211132 1 busy 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211172 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211172 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:13.835 13:22:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1211485 00:38:23.840 Initializing NVMe Controllers 00:38:23.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:23.840 Controller IO queue size 256, less than required. 00:38:23.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:23.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:23.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:23.840 Initialization complete. Launching workers. 
00:38:23.840 ======================================================== 00:38:23.840 Latency(us) 00:38:23.840 Device Information : IOPS MiB/s Average min max 00:38:23.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18951.89 74.03 13512.42 4006.64 33516.14 00:38:23.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19837.49 77.49 12906.15 7870.79 29046.53 00:38:23.840 ======================================================== 00:38:23.840 Total : 38789.39 151.52 13202.36 4006.64 33516.14 00:38:23.840 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1211132 0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 0 idle 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211132 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0' 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211132 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1211132 1 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 1 idle 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:23.840 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:23.841 13:22:26 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:23.841 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211172 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211172 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:24.103 13:22:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:24.675 13:22:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:38:24.675 13:22:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:38:24.675 13:22:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:24.675 13:22:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:24.675 13:22:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1211132 0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 0 idle 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211132 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.69 reactor_0' 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211132 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.69 reactor_0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1211132 1 00:38:27.218 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1211132 1 idle 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1211132 00:38:27.219 
13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1211132 -w 256 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1211172 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1211172 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:27.219 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:27.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.481 13:22:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.481 rmmod nvme_tcp 00:38:27.481 rmmod nvme_fabrics 00:38:27.481 rmmod nvme_keyring 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.481 13:22:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1211132 ']' 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1211132 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1211132 ']' 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1211132 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211132 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1211132' 00:38:27.481 killing process with pid 1211132 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1211132 00:38:27.481 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1211132 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:27.743 13:22:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.288 13:22:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.288 00:38:30.288 real 0m25.472s 00:38:30.288 user 0m40.343s 00:38:30.288 sys 0m9.732s 00:38:30.288 13:22:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.288 13:22:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:30.288 ************************************ 00:38:30.288 END TEST nvmf_interrupt 00:38:30.288 ************************************ 00:38:30.288 00:38:30.288 real 30m8.124s 00:38:30.288 user 61m28.945s 00:38:30.288 sys 10m19.906s 00:38:30.288 13:22:32 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.288 13:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:30.288 ************************************ 00:38:30.288 END TEST nvmf_tcp 00:38:30.288 ************************************ 00:38:30.288 13:22:32 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:38:30.288 13:22:32 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:30.288 13:22:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:30.288 13:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.288 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:38:30.288 ************************************ 
00:38:30.288 START TEST spdkcli_nvmf_tcp 00:38:30.288 ************************************ 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:30.288 * Looking for test storage... 00:38:30.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.288 --rc genhtml_branch_coverage=1 00:38:30.288 --rc genhtml_function_coverage=1 00:38:30.288 --rc genhtml_legend=1 00:38:30.288 --rc geninfo_all_blocks=1 00:38:30.288 --rc geninfo_unexecuted_blocks=1 00:38:30.288 00:38:30.288 ' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.288 --rc genhtml_branch_coverage=1 00:38:30.288 --rc genhtml_function_coverage=1 00:38:30.288 --rc genhtml_legend=1 00:38:30.288 --rc geninfo_all_blocks=1 
00:38:30.288 --rc geninfo_unexecuted_blocks=1 00:38:30.288 00:38:30.288 ' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.288 --rc genhtml_branch_coverage=1 00:38:30.288 --rc genhtml_function_coverage=1 00:38:30.288 --rc genhtml_legend=1 00:38:30.288 --rc geninfo_all_blocks=1 00:38:30.288 --rc geninfo_unexecuted_blocks=1 00:38:30.288 00:38:30.288 ' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:30.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.288 --rc genhtml_branch_coverage=1 00:38:30.288 --rc genhtml_function_coverage=1 00:38:30.288 --rc genhtml_legend=1 00:38:30.288 --rc geninfo_all_blocks=1 00:38:30.288 --rc geninfo_unexecuted_blocks=1 00:38:30.288 00:38:30.288 ' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:30.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.288 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1214680 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1214680 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1214680 ']' 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:30.289 
13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:30.289 13:22:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:30.289 [2024-11-29 13:22:32.778630] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:38:30.289 [2024-11-29 13:22:32.778707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214680 ] 00:38:30.289 [2024-11-29 13:22:32.871052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:30.289 [2024-11-29 13:22:32.924999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.289 [2024-11-29 13:22:32.925004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:31.233 13:22:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:31.233 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:31.233 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:31.233 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:31.233 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:31.233 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:31.233 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:31.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:31.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:31.233 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:31.233 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:31.233 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:31.233 ' 00:38:33.787 [2024-11-29 13:22:36.394623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.171 [2024-11-29 13:22:37.750733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:37.720 [2024-11-29 13:22:40.285895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:38:40.267 [2024-11-29 13:22:42.504097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:41.653 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:41.653 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:41.653 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:41.653 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:41.653 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:41.653 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:41.653 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:41.653 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:41.914 13:22:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:42.175 13:22:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:42.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:42.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:42.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:42.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:42.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:42.175 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:42.175 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:42.175 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:42.175 ' 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:48.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:48.767 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:48.767 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:48.767 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1214680 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1214680 ']' 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1214680 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:48.767 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1214680 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1214680' 00:38:48.768 killing process with pid 1214680 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1214680 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1214680 00:38:48.768 13:22:50 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1214680 ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1214680 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1214680 ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1214680 00:38:48.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1214680) - No such process 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1214680 is not found' 00:38:48.768 Process with pid 1214680 is not found 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:48.768 00:38:48.768 real 0m18.208s 00:38:48.768 user 0m40.378s 00:38:48.768 sys 0m0.960s 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.768 13:22:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.768 ************************************ 00:38:48.768 END TEST spdkcli_nvmf_tcp 00:38:48.768 ************************************ 00:38:48.768 13:22:50 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:48.768 13:22:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:48.768 13:22:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:38:48.768 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:38:48.768 ************************************ 00:38:48.768 START TEST nvmf_identify_passthru 00:38:48.768 ************************************ 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:48.768 * Looking for test storage... 00:38:48.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:48.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.768 --rc genhtml_branch_coverage=1 00:38:48.768 --rc genhtml_function_coverage=1 00:38:48.768 --rc genhtml_legend=1 00:38:48.768 --rc geninfo_all_blocks=1 00:38:48.768 --rc geninfo_unexecuted_blocks=1 00:38:48.768 
00:38:48.768 ' 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:48.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.768 --rc genhtml_branch_coverage=1 00:38:48.768 --rc genhtml_function_coverage=1 00:38:48.768 --rc genhtml_legend=1 00:38:48.768 --rc geninfo_all_blocks=1 00:38:48.768 --rc geninfo_unexecuted_blocks=1 00:38:48.768 00:38:48.768 ' 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:48.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.768 --rc genhtml_branch_coverage=1 00:38:48.768 --rc genhtml_function_coverage=1 00:38:48.768 --rc genhtml_legend=1 00:38:48.768 --rc geninfo_all_blocks=1 00:38:48.768 --rc geninfo_unexecuted_blocks=1 00:38:48.768 00:38:48.768 ' 00:38:48.768 13:22:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:48.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:48.768 --rc genhtml_branch_coverage=1 00:38:48.768 --rc genhtml_function_coverage=1 00:38:48.768 --rc genhtml_legend=1 00:38:48.768 --rc geninfo_all_blocks=1 00:38:48.768 --rc geninfo_unexecuted_blocks=1 00:38:48.768 00:38:48.768 ' 00:38:48.768 13:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.768 13:22:50 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.768 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.768 13:22:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.768 13:22:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.768 13:22:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.768 13:22:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.768 13:22:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:48.768 13:22:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:48.769 13:22:50 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:48.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:48.769 13:22:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:48.769 13:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.769 13:22:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:48.769 13:22:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.769 13:22:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.769 13:22:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.769 13:22:51 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.769 13:22:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.769 13:22:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.769 13:22:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:48.769 13:22:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.769 13:22:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.769 13:22:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:48.769 13:22:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:48.769 13:22:51 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:48.769 13:22:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:56.915 
13:22:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:56.915 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:56.915 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:56.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.915 13:22:58 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:56.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:56.915 
13:22:58 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:56.915 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:56.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:56.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:38:56.916 00:38:56.916 --- 10.0.0.2 ping statistics --- 00:38:56.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.916 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:56.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:56.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:38:56.916 00:38:56.916 --- 10.0.0.1 ping statistics --- 00:38:56.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.916 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:56.916 13:22:58 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:56.916 
13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:38:56.916 13:22:58 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:56.916 13:22:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:56.916 13:22:59 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:38:56.916 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:56.916 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:56.916 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1222072 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:57.177 13:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1222072 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1222072 ']' 00:38:57.177 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:38:57.178 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.178 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:57.178 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.178 13:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:57.178 [2024-11-29 13:22:59.770097] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:38:57.178 [2024-11-29 13:22:59.770173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.440 [2024-11-29 13:22:59.869072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:57.440 [2024-11-29 13:22:59.923098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.440 [2024-11-29 13:22:59.923154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.440 [2024-11-29 13:22:59.923172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.440 [2024-11-29 13:22:59.923179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.440 [2024-11-29 13:22:59.923186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:57.440 [2024-11-29 13:22:59.925220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.440 [2024-11-29 13:22:59.925335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:57.440 [2024-11-29 13:22:59.925511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:57.440 [2024-11-29 13:22:59.925511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:58.015 13:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.015 INFO: Log level set to 20 00:38:58.015 INFO: Requests: 00:38:58.015 { 00:38:58.015 "jsonrpc": "2.0", 00:38:58.015 "method": "nvmf_set_config", 00:38:58.015 "id": 1, 00:38:58.015 "params": { 00:38:58.015 "admin_cmd_passthru": { 00:38:58.015 "identify_ctrlr": true 00:38:58.015 } 00:38:58.015 } 00:38:58.015 } 00:38:58.015 00:38:58.015 INFO: response: 00:38:58.015 { 00:38:58.015 "jsonrpc": "2.0", 00:38:58.015 "id": 1, 00:38:58.015 "result": true 00:38:58.015 } 00:38:58.015 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.015 13:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.015 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.015 INFO: Setting log level to 20 00:38:58.015 INFO: Setting log level to 20 00:38:58.015 INFO: Log level set to 20 00:38:58.015 INFO: Log level set to 20 00:38:58.015 
INFO: Requests: 00:38:58.015 { 00:38:58.015 "jsonrpc": "2.0", 00:38:58.015 "method": "framework_start_init", 00:38:58.015 "id": 1 00:38:58.015 } 00:38:58.015 00:38:58.015 INFO: Requests: 00:38:58.015 { 00:38:58.015 "jsonrpc": "2.0", 00:38:58.015 "method": "framework_start_init", 00:38:58.015 "id": 1 00:38:58.015 } 00:38:58.015 00:38:58.276 [2024-11-29 13:23:00.701611] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:58.276 INFO: response: 00:38:58.276 { 00:38:58.276 "jsonrpc": "2.0", 00:38:58.276 "id": 1, 00:38:58.276 "result": true 00:38:58.276 } 00:38:58.276 00:38:58.276 INFO: response: 00:38:58.276 { 00:38:58.276 "jsonrpc": "2.0", 00:38:58.276 "id": 1, 00:38:58.276 "result": true 00:38:58.276 } 00:38:58.276 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.276 13:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.276 INFO: Setting log level to 40 00:38:58.276 INFO: Setting log level to 40 00:38:58.276 INFO: Setting log level to 40 00:38:58.276 [2024-11-29 13:23:00.715171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.276 13:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.276 13:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:58.276 13:23:00 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.276 13:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.538 Nvme0n1 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.538 [2024-11-29 13:23:01.113052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.538 13:23:01 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.538 [ 00:38:58.538 { 00:38:58.538 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:58.538 "subtype": "Discovery", 00:38:58.538 "listen_addresses": [], 00:38:58.538 "allow_any_host": true, 00:38:58.538 "hosts": [] 00:38:58.538 }, 00:38:58.538 { 00:38:58.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.538 "subtype": "NVMe", 00:38:58.538 "listen_addresses": [ 00:38:58.538 { 00:38:58.538 "trtype": "TCP", 00:38:58.538 "adrfam": "IPv4", 00:38:58.538 "traddr": "10.0.0.2", 00:38:58.538 "trsvcid": "4420" 00:38:58.538 } 00:38:58.538 ], 00:38:58.538 "allow_any_host": true, 00:38:58.538 "hosts": [], 00:38:58.538 "serial_number": "SPDK00000000000001", 00:38:58.538 "model_number": "SPDK bdev Controller", 00:38:58.538 "max_namespaces": 1, 00:38:58.538 "min_cntlid": 1, 00:38:58.538 "max_cntlid": 65519, 00:38:58.538 "namespaces": [ 00:38:58.538 { 00:38:58.538 "nsid": 1, 00:38:58.538 "bdev_name": "Nvme0n1", 00:38:58.538 "name": "Nvme0n1", 00:38:58.538 "nguid": "36344730526054870025384500000044", 00:38:58.538 "uuid": "36344730-5260-5487-0025-384500000044" 00:38:58.538 } 00:38:58.538 ] 00:38:58.538 } 00:38:58.538 ] 00:38:58.538 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:58.538 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:58.884 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.884 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:58.884 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:58.884 13:23:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.884 rmmod nvme_tcp 00:38:58.884 rmmod nvme_fabrics 00:38:58.884 rmmod nvme_keyring 00:38:58.884 13:23:01 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:58.884 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:59.190 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1222072 ']' 00:38:59.190 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1222072 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1222072 ']' 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1222072 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222072 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222072' 00:38:59.190 killing process with pid 1222072 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1222072 00:38:59.190 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1222072 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:59.451 13:23:01 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.451 13:23:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.451 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:59.451 13:23:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.365 13:23:03 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.365 00:39:01.365 real 0m13.213s 00:39:01.365 user 0m10.165s 00:39:01.365 sys 0m6.742s 00:39:01.365 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.365 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:01.365 ************************************ 00:39:01.365 END TEST nvmf_identify_passthru 00:39:01.365 ************************************ 00:39:01.365 13:23:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:01.365 13:23:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:01.365 13:23:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.365 13:23:04 -- common/autotest_common.sh@10 -- # set +x 00:39:01.627 ************************************ 00:39:01.627 START TEST nvmf_dif 00:39:01.627 ************************************ 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:01.627 * Looking for test storage... 
00:39:01.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.627 13:23:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.627 --rc genhtml_branch_coverage=1 00:39:01.627 --rc genhtml_function_coverage=1 00:39:01.627 --rc genhtml_legend=1 00:39:01.627 --rc geninfo_all_blocks=1 00:39:01.627 --rc geninfo_unexecuted_blocks=1 00:39:01.627 00:39:01.627 ' 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.627 --rc genhtml_branch_coverage=1 00:39:01.627 --rc genhtml_function_coverage=1 00:39:01.627 --rc genhtml_legend=1 00:39:01.627 --rc geninfo_all_blocks=1 00:39:01.627 --rc geninfo_unexecuted_blocks=1 00:39:01.627 00:39:01.627 ' 00:39:01.627 13:23:04 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:39:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.627 --rc genhtml_branch_coverage=1 00:39:01.627 --rc genhtml_function_coverage=1 00:39:01.627 --rc genhtml_legend=1 00:39:01.627 --rc geninfo_all_blocks=1 00:39:01.627 --rc geninfo_unexecuted_blocks=1 00:39:01.627 00:39:01.627 ' 00:39:01.628 13:23:04 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:01.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.628 --rc genhtml_branch_coverage=1 00:39:01.628 --rc genhtml_function_coverage=1 00:39:01.628 --rc genhtml_legend=1 00:39:01.628 --rc geninfo_all_blocks=1 00:39:01.628 --rc geninfo_unexecuted_blocks=1 00:39:01.628 00:39:01.628 ' 00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:01.628 13:23:04 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:01.628 13:23:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.628 13:23:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.628 13:23:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.628 13:23:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.628 13:23:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.628 13:23:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.628 13:23:04 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.628 13:23:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:01.628 13:23:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:01.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:01.628 13:23:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:01.628 13:23:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.628 13:23:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:01.628 13:23:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.889 13:23:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:01.889 13:23:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:01.889 13:23:04 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.889 13:23:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:10.035 13:23:11 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.035 13:23:11 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:10.036 13:23:11 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:10.036 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:10.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.036 13:23:11 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:10.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:10.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.036 
13:23:11 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:10.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:10.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:39:10.036 00:39:10.036 --- 10.0.0.2 ping statistics --- 00:39:10.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.036 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:10.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:39:10.036 00:39:10.036 --- 10.0.0.1 ping statistics --- 00:39:10.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.036 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:39:10.036 13:23:11 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:10.037 13:23:11 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:12.582 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:12.582 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:39:12.582 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:12.582 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:12.840 13:23:15 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:13.100 13:23:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:13.100 13:23:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:13.100 13:23:15 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:13.100 13:23:15 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1228252 00:39:13.100 13:23:15 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1228252 00:39:13.100 13:23:15 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1228252 ']' 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:13.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.100 13:23:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:13.100 [2024-11-29 13:23:15.588986] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:39:13.100 [2024-11-29 13:23:15.589032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.100 [2024-11-29 13:23:15.683000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.100 [2024-11-29 13:23:15.718376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:13.100 [2024-11-29 13:23:15.718409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.100 [2024-11-29 13:23:15.718417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:13.100 [2024-11-29 13:23:15.718423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:13.100 [2024-11-29 13:23:15.718429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:13.100 [2024-11-29 13:23:15.718973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:39:14.042 13:23:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 13:23:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:14.042 13:23:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:14.042 13:23:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 [2024-11-29 13:23:16.416373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.042 13:23:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 ************************************ 00:39:14.042 START TEST fio_dif_1_default 00:39:14.042 ************************************ 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 bdev_null0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:14.042 [2024-11-29 13:23:16.504837] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:14.042 { 00:39:14.042 "params": { 00:39:14.042 "name": "Nvme$subsystem", 00:39:14.042 "trtype": "$TEST_TRANSPORT", 00:39:14.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.042 "adrfam": "ipv4", 00:39:14.042 "trsvcid": "$NVMF_PORT", 00:39:14.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.042 "hdgst": ${hdgst:-false}, 00:39:14.042 "ddgst": ${ddgst:-false} 00:39:14.042 }, 00:39:14.042 "method": "bdev_nvme_attach_controller" 00:39:14.042 } 00:39:14.042 EOF 00:39:14.042 )") 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:14.042 "params": { 00:39:14.042 "name": "Nvme0", 00:39:14.042 "trtype": "tcp", 00:39:14.042 "traddr": "10.0.0.2", 00:39:14.042 "adrfam": "ipv4", 00:39:14.042 "trsvcid": "4420", 00:39:14.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.042 "hdgst": false, 00:39:14.042 "ddgst": false 00:39:14.042 }, 00:39:14.042 "method": "bdev_nvme_attach_controller" 00:39:14.042 }' 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:14.042 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:14.043 13:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:14.303 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:14.303 fio-3.35 
00:39:14.303 Starting 1 thread 00:39:26.532 00:39:26.532 filename0: (groupid=0, jobs=1): err= 0: pid=1228780: Fri Nov 29 13:23:27 2024 00:39:26.532 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:39:26.532 slat (nsec): min=5538, max=33558, avg=6369.91, stdev=1619.78 00:39:26.532 clat (usec): min=40856, max=42914, avg=41064.59, stdev=286.22 00:39:26.532 lat (usec): min=40864, max=42947, avg=41070.96, stdev=287.00 00:39:26.532 clat percentiles (usec): 00:39:26.532 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:26.532 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:26.532 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:39:26.532 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:39:26.532 | 99.99th=[42730] 00:39:26.532 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=388.80, stdev=11.72, samples=20 00:39:26.532 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:26.532 lat (msec) : 50=100.00% 00:39:26.532 cpu : usr=92.88%, sys=6.91%, ctx=7, majf=0, minf=217 00:39:26.532 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.532 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.532 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:26.532 00:39:26.532 Run status group 0 (all jobs): 00:39:26.532 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10024-10024msec 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 00:39:26.532 real 0m11.202s 00:39:26.532 user 0m17.768s 00:39:26.532 sys 0m1.124s 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 ************************************ 00:39:26.532 END TEST fio_dif_1_default 00:39:26.532 ************************************ 00:39:26.532 13:23:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:26.532 13:23:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:26.532 13:23:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 ************************************ 00:39:26.532 START TEST fio_dif_1_multi_subsystems 00:39:26.532 ************************************ 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 bdev_null0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.532 [2024-11-29 13:23:27.785823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.532 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.533 bdev_null1 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.533 13:23:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:26.533 { 00:39:26.533 "params": { 00:39:26.533 "name": "Nvme$subsystem", 00:39:26.533 "trtype": "$TEST_TRANSPORT", 00:39:26.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:26.533 "adrfam": "ipv4", 00:39:26.533 "trsvcid": "$NVMF_PORT", 00:39:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:26.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:26.533 "hdgst": ${hdgst:-false}, 00:39:26.533 "ddgst": ${ddgst:-false} 00:39:26.533 }, 00:39:26.533 "method": "bdev_nvme_attach_controller" 00:39:26.533 } 00:39:26.533 EOF 00:39:26.533 )") 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:26.533 13:23:27 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:26.533 { 00:39:26.533 "params": { 00:39:26.533 "name": "Nvme$subsystem", 00:39:26.533 "trtype": "$TEST_TRANSPORT", 00:39:26.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:26.533 "adrfam": "ipv4", 00:39:26.533 "trsvcid": "$NVMF_PORT", 00:39:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:26.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:26.533 "hdgst": ${hdgst:-false}, 00:39:26.533 "ddgst": ${ddgst:-false} 00:39:26.533 }, 00:39:26.533 "method": "bdev_nvme_attach_controller" 00:39:26.533 } 00:39:26.533 EOF 00:39:26.533 )") 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:26.533 "params": { 00:39:26.533 "name": "Nvme0", 00:39:26.533 "trtype": "tcp", 00:39:26.533 "traddr": "10.0.0.2", 00:39:26.533 "adrfam": "ipv4", 00:39:26.533 "trsvcid": "4420", 00:39:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:26.533 "hdgst": false, 00:39:26.533 "ddgst": false 00:39:26.533 }, 00:39:26.533 "method": "bdev_nvme_attach_controller" 00:39:26.533 },{ 00:39:26.533 "params": { 00:39:26.533 "name": "Nvme1", 00:39:26.533 "trtype": "tcp", 00:39:26.533 "traddr": "10.0.0.2", 00:39:26.533 "adrfam": "ipv4", 00:39:26.533 "trsvcid": "4420", 00:39:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:26.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:26.533 "hdgst": false, 00:39:26.533 "ddgst": false 00:39:26.533 }, 00:39:26.533 "method": "bdev_nvme_attach_controller" 00:39:26.533 }' 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:26.533 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:26.534 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:26.534 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:26.534 13:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.534 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:26.534 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:26.534 fio-3.35 00:39:26.534 Starting 2 threads 00:39:36.535 00:39:36.535 filename0: (groupid=0, jobs=1): err= 0: pid=1230979: Fri Nov 29 13:23:38 2024 00:39:36.535 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:39:36.535 slat (nsec): min=5540, max=32386, avg=6419.43, stdev=1570.09 00:39:36.535 clat (usec): min=40848, max=42141, avg=40987.33, stdev=79.99 00:39:36.535 lat (usec): min=40856, max=42174, avg=40993.75, stdev=80.59 00:39:36.535 clat percentiles (usec): 00:39:36.535 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:36.535 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:36.535 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:36.535 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:36.535 | 99.99th=[42206] 00:39:36.535 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:39:36.535 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:36.535 lat (msec) : 50=100.00% 00:39:36.535 cpu : usr=95.19%, sys=4.60%, ctx=9, majf=0, minf=165 00:39:36.535 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.535 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.535 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:36.535 filename1: (groupid=0, jobs=1): err= 0: pid=1230980: Fri Nov 29 13:23:38 2024 00:39:36.535 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:39:36.535 slat (nsec): min=5538, max=33850, avg=6503.71, stdev=1515.40 00:39:36.535 clat (usec): min=40877, max=42054, avg=40991.06, stdev=101.13 00:39:36.535 lat (usec): min=40882, max=42088, avg=40997.56, stdev=101.69 00:39:36.535 clat percentiles (usec): 00:39:36.535 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:36.535 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:36.535 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:36.535 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:36.535 | 99.99th=[42206] 00:39:36.535 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:39:36.535 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:36.535 lat (msec) : 50=100.00% 00:39:36.535 cpu : usr=95.40%, sys=4.39%, ctx=12, majf=0, minf=89 00:39:36.535 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.535 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.535 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:36.535 00:39:36.535 Run status group 0 (all jobs): 00:39:36.535 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-400kB/s), io=7808KiB (7995kB), run=10006-10007msec 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- 
# local sub 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.535 13:23:39 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.535 00:39:36.535 real 0m11.399s 00:39:36.535 user 0m34.762s 00:39:36.535 sys 0m1.218s 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:36.535 ************************************ 00:39:36.535 END TEST fio_dif_1_multi_subsystems 00:39:36.535 ************************************ 00:39:36.535 13:23:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:36.535 13:23:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:36.535 13:23:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.535 13:23:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:36.797 ************************************ 00:39:36.797 START TEST fio_dif_rand_params 00:39:36.797 ************************************ 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:36.797 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:36.797 13:23:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.798 bdev_null0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.798 [2024-11-29 13:23:39.269010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:36.798 { 00:39:36.798 "params": { 00:39:36.798 "name": "Nvme$subsystem", 00:39:36.798 "trtype": "$TEST_TRANSPORT", 00:39:36.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.798 "adrfam": "ipv4", 00:39:36.798 "trsvcid": "$NVMF_PORT", 00:39:36.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.798 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:39:36.798 "hdgst": ${hdgst:-false}, 00:39:36.798 "ddgst": ${ddgst:-false} 00:39:36.798 }, 00:39:36.798 "method": "bdev_nvme_attach_controller" 00:39:36.798 } 00:39:36.798 EOF 00:39:36.798 )") 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 
00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:36.798 "params": { 00:39:36.798 "name": "Nvme0", 00:39:36.798 "trtype": "tcp", 00:39:36.798 "traddr": "10.0.0.2", 00:39:36.798 "adrfam": "ipv4", 00:39:36.798 "trsvcid": "4420", 00:39:36.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:36.798 "hdgst": false, 00:39:36.798 "ddgst": false 00:39:36.798 }, 00:39:36.798 "method": "bdev_nvme_attach_controller" 00:39:36.798 }' 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:36.798 13:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:37.059 filename0: (g=0): rw=randread, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:37.059 ... 00:39:37.059 fio-3.35 00:39:37.059 Starting 3 threads 00:39:43.647 00:39:43.647 filename0: (groupid=0, jobs=1): err= 0: pid=1233348: Fri Nov 29 13:23:45 2024 00:39:43.647 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(191MiB/5043msec) 00:39:43.647 slat (nsec): min=5575, max=35286, avg=7796.85, stdev=2933.39 00:39:43.647 clat (usec): min=4462, max=91893, avg=9877.31, stdev=5644.76 00:39:43.647 lat (usec): min=4471, max=91899, avg=9885.11, stdev=5645.06 00:39:43.647 clat percentiles (usec): 00:39:43.647 | 1.00th=[ 5080], 5.00th=[ 6390], 10.00th=[ 7373], 20.00th=[ 8094], 00:39:43.647 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:39:43.647 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11863], 00:39:43.647 | 99.00th=[46400], 99.50th=[49021], 99.90th=[91751], 99.95th=[91751], 00:39:43.647 | 99.99th=[91751] 00:39:43.647 bw ( KiB/s): min=31488, max=43008, per=33.56%, avg=39014.40, stdev=3186.49, samples=10 00:39:43.647 iops : min= 246, max= 336, avg=304.80, stdev=24.89, samples=10 00:39:43.647 lat (msec) : 10=63.70%, 20=34.99%, 50=1.05%, 100=0.26% 00:39:43.647 cpu : usr=93.95%, sys=5.79%, ctx=9, majf=0, minf=82 00:39:43.647 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:43.647 filename0: (groupid=0, jobs=1): err= 0: pid=1233349: Fri Nov 29 13:23:45 2024 00:39:43.647 read: IOPS=303, BW=37.9MiB/s (39.8MB/s)(191MiB/5046msec) 00:39:43.647 slat (nsec): min=5625, max=48468, avg=8086.82, stdev=3380.91 00:39:43.647 clat (usec): min=4749, max=89139, avg=9849.04, stdev=6176.04 00:39:43.647 lat (usec): 
min=4773, max=89145, avg=9857.12, stdev=6176.65 00:39:43.647 clat percentiles (usec): 00:39:43.647 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 7898], 00:39:43.647 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:39:43.647 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:39:43.647 | 99.00th=[49021], 99.50th=[49546], 99.90th=[88605], 99.95th=[89654], 00:39:43.647 | 99.99th=[89654] 00:39:43.647 bw ( KiB/s): min=26880, max=45312, per=33.67%, avg=39142.40, stdev=5352.86, samples=10 00:39:43.647 iops : min= 210, max= 354, avg=305.80, stdev=41.82, samples=10 00:39:43.647 lat (msec) : 10=74.79%, 20=23.25%, 50=1.63%, 100=0.33% 00:39:43.647 cpu : usr=93.82%, sys=5.93%, ctx=10, majf=0, minf=152 00:39:43.647 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 issued rwts: total=1531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:43.647 filename0: (groupid=0, jobs=1): err= 0: pid=1233350: Fri Nov 29 13:23:45 2024 00:39:43.647 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(191MiB/5046msec) 00:39:43.647 slat (nsec): min=5559, max=35276, avg=7527.67, stdev=1728.18 00:39:43.647 clat (usec): min=4354, max=51193, avg=9882.06, stdev=5685.43 00:39:43.647 lat (usec): min=4363, max=51199, avg=9889.59, stdev=5685.66 00:39:43.647 clat percentiles (usec): 00:39:43.647 | 1.00th=[ 5407], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 7898], 00:39:43.647 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:39:43.647 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10683], 95.00th=[11207], 00:39:43.647 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49021], 99.95th=[51119], 00:39:43.647 | 99.99th=[51119] 00:39:43.647 bw ( KiB/s): min=22272, max=43776, 
per=33.56%, avg=39014.40, stdev=6283.69, samples=10 00:39:43.647 iops : min= 174, max= 342, avg=304.80, stdev=49.09, samples=10 00:39:43.647 lat (msec) : 10=71.89%, 20=26.02%, 50=2.03%, 100=0.07% 00:39:43.647 cpu : usr=93.36%, sys=6.36%, ctx=8, majf=0, minf=69 00:39:43.647 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.647 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.647 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:43.647 00:39:43.647 Run status group 0 (all jobs): 00:39:43.647 READ: bw=114MiB/s (119MB/s), 37.8MiB/s-37.9MiB/s (39.6MB/s-39.8MB/s), io=573MiB (601MB), run=5043-5046msec 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.647 13:23:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.647 bdev_null0 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:43.647 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 [2024-11-29 13:23:45.424705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 bdev_null1 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 bdev_null2 00:39:43.648 13:23:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:43.648 { 00:39:43.648 "params": { 00:39:43.648 "name": "Nvme$subsystem", 00:39:43.648 "trtype": "$TEST_TRANSPORT", 00:39:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.648 "adrfam": "ipv4", 00:39:43.648 "trsvcid": "$NVMF_PORT", 00:39:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.648 "hdgst": ${hdgst:-false}, 00:39:43.648 "ddgst": ${ddgst:-false} 00:39:43.648 }, 00:39:43.648 "method": "bdev_nvme_attach_controller" 00:39:43.648 } 00:39:43.648 EOF 00:39:43.648 )") 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:43.648 
13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:43.648 { 00:39:43.648 "params": { 00:39:43.648 "name": "Nvme$subsystem", 00:39:43.648 "trtype": "$TEST_TRANSPORT", 00:39:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.648 "adrfam": "ipv4", 00:39:43.648 "trsvcid": "$NVMF_PORT", 00:39:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.648 "hdgst": ${hdgst:-false}, 00:39:43.648 "ddgst": ${ddgst:-false} 00:39:43.648 }, 00:39:43.648 "method": "bdev_nvme_attach_controller" 00:39:43.648 } 00:39:43.648 EOF 00:39:43.648 )") 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:43.648 13:23:45 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:43.648 { 00:39:43.648 "params": { 00:39:43.648 "name": "Nvme$subsystem", 00:39:43.648 "trtype": "$TEST_TRANSPORT", 00:39:43.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:43.648 "adrfam": "ipv4", 00:39:43.648 "trsvcid": "$NVMF_PORT", 00:39:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:43.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:43.648 "hdgst": ${hdgst:-false}, 00:39:43.648 "ddgst": ${ddgst:-false} 00:39:43.648 }, 00:39:43.648 "method": "bdev_nvme_attach_controller" 00:39:43.648 } 00:39:43.648 EOF 00:39:43.648 )") 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:43.648 13:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:43.648 "params": { 00:39:43.648 "name": "Nvme0", 00:39:43.648 "trtype": "tcp", 00:39:43.648 "traddr": "10.0.0.2", 00:39:43.648 "adrfam": "ipv4", 00:39:43.648 "trsvcid": "4420", 00:39:43.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:43.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:43.648 "hdgst": false, 00:39:43.648 "ddgst": false 00:39:43.648 }, 00:39:43.649 "method": "bdev_nvme_attach_controller" 00:39:43.649 },{ 00:39:43.649 "params": { 00:39:43.649 "name": "Nvme1", 00:39:43.649 "trtype": "tcp", 00:39:43.649 "traddr": "10.0.0.2", 00:39:43.649 "adrfam": "ipv4", 00:39:43.649 "trsvcid": "4420", 00:39:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:43.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:43.649 "hdgst": false, 00:39:43.649 "ddgst": false 00:39:43.649 }, 00:39:43.649 "method": "bdev_nvme_attach_controller" 00:39:43.649 },{ 00:39:43.649 "params": { 00:39:43.649 "name": "Nvme2", 00:39:43.649 "trtype": "tcp", 00:39:43.649 "traddr": "10.0.0.2", 00:39:43.649 "adrfam": "ipv4", 00:39:43.649 "trsvcid": "4420", 00:39:43.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:43.649 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:43.649 "hdgst": false, 00:39:43.649 "ddgst": false 00:39:43.649 }, 00:39:43.649 "method": "bdev_nvme_attach_controller" 00:39:43.649 }' 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:43.649 13:23:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:43.649 13:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:43.649 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:43.649 ... 00:39:43.649 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:43.649 ... 00:39:43.649 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:43.649 ... 
00:39:43.649 fio-3.35 00:39:43.649 Starting 24 threads 00:39:55.880 00:39:55.880 filename0: (groupid=0, jobs=1): err= 0: pid=1234693: Fri Nov 29 13:23:57 2024 00:39:55.880 read: IOPS=682, BW=2728KiB/s (2794kB/s)(26.7MiB/10022msec) 00:39:55.880 slat (nsec): min=5755, max=95722, avg=10731.04, stdev=7608.61 00:39:55.880 clat (usec): min=916, max=26020, avg=23367.43, stdev=4404.85 00:39:55.880 lat (usec): min=933, max=26028, avg=23378.16, stdev=4403.53 00:39:55.880 clat percentiles (usec): 00:39:55.880 | 1.00th=[ 1336], 5.00th=[22938], 10.00th=[23725], 20.00th=[23987], 00:39:55.880 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:39:55.880 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:39:55.880 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[26084], 00:39:55.880 | 99.99th=[26084] 00:39:55.880 bw ( KiB/s): min= 2560, max= 4640, per=4.38%, avg=2728.00, stdev=454.55, samples=20 00:39:55.880 iops : min= 640, max= 1160, avg=682.00, stdev=113.64, samples=20 00:39:55.880 lat (usec) : 1000=0.06% 00:39:55.880 lat (msec) : 2=2.72%, 4=0.56%, 10=0.47%, 20=1.17%, 50=95.03% 00:39:55.880 cpu : usr=98.78%, sys=0.92%, ctx=17, majf=0, minf=81 00:39:55.880 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:55.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, jobs=1): err= 0: pid=1234694: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=652, BW=2611KiB/s (2674kB/s)(25.8MiB/10126msec) 00:39:55.881 slat (nsec): min=5713, max=93371, avg=14359.47, stdev=13643.28 00:39:55.881 clat (msec): min=8, max=134, avg=24.39, stdev= 5.63 00:39:55.881 lat (msec): min=8, max=134, avg=24.40, stdev= 5.63 00:39:55.881 clat 
percentiles (msec): 00:39:55.881 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.881 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.881 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.881 | 99.00th=[ 28], 99.50th=[ 35], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.881 | 99.99th=[ 136] 00:39:55.881 bw ( KiB/s): min= 2560, max= 2816, per=4.24%, avg=2638.35, stdev=84.32, samples=20 00:39:55.881 iops : min= 640, max= 704, avg=659.55, stdev=21.07, samples=20 00:39:55.881 lat (msec) : 10=0.03%, 20=2.42%, 50=97.31%, 250=0.24% 00:39:55.881 cpu : usr=98.50%, sys=1.01%, ctx=137, majf=0, minf=39 00:39:55.881 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:55.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, jobs=1): err= 0: pid=1234695: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=652, BW=2610KiB/s (2673kB/s)(25.8MiB/10127msec) 00:39:55.881 slat (nsec): min=5739, max=97939, avg=13970.39, stdev=10064.63 00:39:55.881 clat (msec): min=8, max=130, avg=24.40, stdev= 5.44 00:39:55.881 lat (msec): min=8, max=130, avg=24.42, stdev= 5.44 00:39:55.881 clat percentiles (msec): 00:39:55.881 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.881 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.881 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.881 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 131], 99.95th=[ 131], 00:39:55.881 | 99.99th=[ 131] 00:39:55.881 bw ( KiB/s): min= 2560, max= 2816, per=4.24%, avg=2637.55, stdev=76.68, samples=20 00:39:55.881 iops : min= 640, max= 704, avg=659.35, stdev=19.16, samples=20 00:39:55.881 lat (msec) : 
10=0.03%, 20=1.42%, 50=98.31%, 250=0.24% 00:39:55.881 cpu : usr=98.84%, sys=0.85%, ctx=78, majf=0, minf=72 00:39:55.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:55.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, jobs=1): err= 0: pid=1234696: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=646, BW=2585KiB/s (2647kB/s)(25.4MiB/10075msec) 00:39:55.881 slat (nsec): min=5724, max=88901, avg=17285.77, stdev=13414.52 00:39:55.881 clat (msec): min=13, max=142, avg=24.47, stdev= 3.56 00:39:55.881 lat (msec): min=13, max=142, avg=24.49, stdev= 3.56 00:39:55.881 clat percentiles (msec): 00:39:55.881 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.881 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.881 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.881 | 99.00th=[ 26], 99.50th=[ 34], 99.90th=[ 83], 99.95th=[ 83], 00:39:55.881 | 99.99th=[ 142] 00:39:55.881 bw ( KiB/s): min= 2304, max= 2688, per=4.17%, avg=2598.85, stdev=100.31, samples=20 00:39:55.881 iops : min= 576, max= 672, avg=649.70, stdev=25.08, samples=20 00:39:55.881 lat (msec) : 20=0.37%, 50=99.39%, 100=0.21%, 250=0.03% 00:39:55.881 cpu : usr=98.89%, sys=0.79%, ctx=76, majf=0, minf=38 00:39:55.881 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:39:55.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, 
jobs=1): err= 0: pid=1234697: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=654, BW=2619KiB/s (2682kB/s)(25.9MiB/10127msec) 00:39:55.881 slat (nsec): min=5723, max=80350, avg=17028.77, stdev=10805.10 00:39:55.881 clat (msec): min=4, max=130, avg=24.29, stdev= 5.56 00:39:55.881 lat (msec): min=4, max=130, avg=24.31, stdev= 5.56 00:39:55.881 clat percentiles (msec): 00:39:55.881 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.881 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.881 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.881 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 131], 99.95th=[ 131], 00:39:55.881 | 99.99th=[ 131] 00:39:55.881 bw ( KiB/s): min= 2560, max= 2992, per=4.25%, avg=2646.35, stdev=103.61, samples=20 00:39:55.881 iops : min= 640, max= 748, avg=661.55, stdev=25.90, samples=20 00:39:55.881 lat (msec) : 10=0.80%, 20=0.98%, 50=97.98%, 250=0.24% 00:39:55.881 cpu : usr=99.08%, sys=0.62%, ctx=16, majf=0, minf=32 00:39:55.881 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:55.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, jobs=1): err= 0: pid=1234698: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=652, BW=2611KiB/s (2673kB/s)(25.7MiB/10079msec) 00:39:55.881 slat (nsec): min=5145, max=81612, avg=16314.35, stdev=9984.29 00:39:55.881 clat (msec): min=13, max=130, avg=24.37, stdev= 5.48 00:39:55.881 lat (msec): min=13, max=130, avg=24.38, stdev= 5.48 00:39:55.881 clat percentiles (msec): 00:39:55.881 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.881 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.881 | 70.00th=[ 25], 
80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.881 | 99.00th=[ 28], 99.50th=[ 37], 99.90th=[ 131], 99.95th=[ 131], 00:39:55.881 | 99.99th=[ 131] 00:39:55.881 bw ( KiB/s): min= 2432, max= 2976, per=4.22%, avg=2625.05, stdev=111.77, samples=20 00:39:55.881 iops : min= 608, max= 744, avg=656.25, stdev=27.95, samples=20 00:39:55.881 lat (msec) : 20=2.80%, 50=96.96%, 250=0.24% 00:39:55.881 cpu : usr=98.24%, sys=1.17%, ctx=190, majf=0, minf=53 00:39:55.881 IO depths : 1=5.4%, 2=11.4%, 4=24.3%, 8=51.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:39:55.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.881 issued rwts: total=6578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.881 filename0: (groupid=0, jobs=1): err= 0: pid=1234699: Fri Nov 29 13:23:57 2024 00:39:55.881 read: IOPS=646, BW=2587KiB/s (2649kB/s)(25.5MiB/10077msec) 00:39:55.881 slat (nsec): min=5608, max=82713, avg=22819.51, stdev=12074.14 00:39:55.881 clat (msec): min=12, max=130, avg=24.53, stdev= 5.37 00:39:55.882 lat (msec): min=12, max=130, avg=24.56, stdev= 5.37 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.882 | 99.00th=[ 27], 99.50th=[ 37], 99.90th=[ 130], 99.95th=[ 131], 00:39:55.882 | 99.99th=[ 131] 00:39:55.882 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2601.25, stdev=83.23, samples=20 00:39:55.882 iops : min= 608, max= 672, avg=650.30, stdev=20.81, samples=20 00:39:55.882 lat (msec) : 20=0.57%, 50=99.19%, 250=0.25% 00:39:55.882 cpu : usr=98.87%, sys=0.87%, ctx=11, majf=0, minf=27 00:39:55.882 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 issued rwts: total=6518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.882 filename0: (groupid=0, jobs=1): err= 0: pid=1234700: Fri Nov 29 13:23:57 2024 00:39:55.882 read: IOPS=644, BW=2579KiB/s (2641kB/s)(25.4MiB/10100msec) 00:39:55.882 slat (usec): min=5, max=124, avg=29.83, stdev=18.09 00:39:55.882 clat (msec): min=13, max=130, avg=24.55, stdev= 5.36 00:39:55.882 lat (msec): min=13, max=130, avg=24.58, stdev= 5.36 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.882 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 130], 99.95th=[ 130], 00:39:55.882 | 99.99th=[ 131] 00:39:55.882 bw ( KiB/s): min= 2427, max= 2688, per=4.17%, avg=2598.10, stdev=84.66, samples=20 00:39:55.882 iops : min= 606, max= 672, avg=649.45, stdev=21.26, samples=20 00:39:55.882 lat (msec) : 20=0.25%, 50=99.51%, 250=0.25% 00:39:55.882 cpu : usr=99.03%, sys=0.67%, ctx=28, majf=0, minf=65 00:39:55.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.882 filename1: (groupid=0, jobs=1): err= 0: pid=1234702: Fri Nov 29 13:23:57 2024 00:39:55.882 read: IOPS=647, BW=2589KiB/s (2651kB/s)(25.5MiB/10087msec) 00:39:55.882 slat (usec): min=5, max=106, avg=31.69, stdev=17.96 00:39:55.882 clat (msec): min=10, max=132, 
avg=24.43, stdev= 5.37 00:39:55.882 lat (msec): min=10, max=132, avg=24.46, stdev= 5.37 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.882 | 99.00th=[ 26], 99.50th=[ 28], 99.90th=[ 132], 99.95th=[ 133], 00:39:55.882 | 99.99th=[ 133] 00:39:55.882 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2603.90, stdev=75.21, samples=20 00:39:55.882 iops : min= 608, max= 672, avg=650.90, stdev=18.81, samples=20 00:39:55.882 lat (msec) : 20=0.09%, 50=99.66%, 250=0.25% 00:39:55.882 cpu : usr=98.73%, sys=0.90%, ctx=70, majf=0, minf=46 00:39:55.882 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.882 filename1: (groupid=0, jobs=1): err= 0: pid=1234703: Fri Nov 29 13:23:57 2024 00:39:55.882 read: IOPS=648, BW=2592KiB/s (2654kB/s)(25.6MiB/10098msec) 00:39:55.882 slat (usec): min=5, max=113, avg=31.00, stdev=17.94 00:39:55.882 clat (msec): min=13, max=132, avg=24.41, stdev= 5.37 00:39:55.882 lat (msec): min=13, max=132, avg=24.44, stdev= 5.37 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.882 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.882 | 99.99th=[ 133] 00:39:55.882 bw ( KiB/s): min= 2432, max= 2688, per=4.19%, avg=2610.60, stdev=77.02, samples=20 00:39:55.882 iops : 
min= 608, max= 672, avg=652.60, stdev=19.29, samples=20 00:39:55.882 lat (msec) : 20=0.31%, 50=99.45%, 250=0.24% 00:39:55.882 cpu : usr=98.65%, sys=0.89%, ctx=100, majf=0, minf=35 00:39:55.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.882 filename1: (groupid=0, jobs=1): err= 0: pid=1234704: Fri Nov 29 13:23:57 2024 00:39:55.882 read: IOPS=652, BW=2610KiB/s (2673kB/s)(25.8MiB/10127msec) 00:39:55.882 slat (usec): min=5, max=112, avg=21.72, stdev=15.71 00:39:55.882 clat (msec): min=10, max=130, avg=24.33, stdev= 5.41 00:39:55.882 lat (msec): min=10, max=130, avg=24.35, stdev= 5.41 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.882 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 130], 99.95th=[ 130], 00:39:55.882 | 99.99th=[ 131] 00:39:55.882 bw ( KiB/s): min= 2560, max= 2816, per=4.24%, avg=2637.55, stdev=76.68, samples=20 00:39:55.882 iops : min= 640, max= 704, avg=659.35, stdev=19.16, samples=20 00:39:55.882 lat (msec) : 20=1.45%, 50=98.31%, 250=0.24% 00:39:55.882 cpu : usr=98.57%, sys=0.99%, ctx=161, majf=0, minf=39 00:39:55.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.882 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:39:55.882 filename1: (groupid=0, jobs=1): err= 0: pid=1234705: Fri Nov 29 13:23:57 2024 00:39:55.882 read: IOPS=651, BW=2606KiB/s (2669kB/s)(25.6MiB/10077msec) 00:39:55.882 slat (usec): min=5, max=109, avg=19.46, stdev=16.86 00:39:55.882 clat (msec): min=7, max=134, avg=24.43, stdev= 6.40 00:39:55.882 lat (msec): min=7, max=134, avg=24.45, stdev= 6.40 00:39:55.882 clat percentiles (msec): 00:39:55.882 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 24], 00:39:55.882 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.882 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 32], 00:39:55.882 | 99.00th=[ 40], 99.50th=[ 43], 99.90th=[ 134], 99.95th=[ 136], 00:39:55.882 | 99.99th=[ 136] 00:39:55.882 bw ( KiB/s): min= 2400, max= 2800, per=4.21%, avg=2620.45, stdev=85.81, samples=20 00:39:55.882 iops : min= 600, max= 700, avg=655.10, stdev=21.44, samples=20 00:39:55.882 lat (msec) : 10=0.12%, 20=8.74%, 50=90.86%, 100=0.06%, 250=0.21% 00:39:55.882 cpu : usr=98.95%, sys=0.70%, ctx=115, majf=0, minf=29 00:39:55.882 IO depths : 1=0.8%, 2=2.8%, 4=10.4%, 8=71.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:39:55.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.882 complete : 0=0.0%, 4=91.0%, 8=5.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename1: (groupid=0, jobs=1): err= 0: pid=1234706: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=646, BW=2585KiB/s (2647kB/s)(25.4MiB/10075msec) 00:39:55.883 slat (usec): min=5, max=124, avg=32.52, stdev=22.12 00:39:55.883 clat (msec): min=19, max=132, avg=24.41, stdev= 5.42 00:39:55.883 lat (msec): min=19, max=132, avg=24.44, stdev= 5.42 00:39:55.883 clat percentiles (msec): 00:39:55.883 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.883 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 
24], 60.00th=[ 25], 00:39:55.883 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.883 | 99.00th=[ 26], 99.50th=[ 29], 99.90th=[ 132], 99.95th=[ 133], 00:39:55.883 | 99.99th=[ 133] 00:39:55.883 bw ( KiB/s): min= 2304, max= 2693, per=4.17%, avg=2598.85, stdev=102.47, samples=20 00:39:55.883 iops : min= 576, max= 673, avg=649.70, stdev=25.60, samples=20 00:39:55.883 lat (msec) : 20=0.03%, 50=99.72%, 250=0.25% 00:39:55.883 cpu : usr=99.21%, sys=0.52%, ctx=13, majf=0, minf=25 00:39:55.883 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename1: (groupid=0, jobs=1): err= 0: pid=1234707: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=649, BW=2597KiB/s (2659kB/s)(25.6MiB/10076msec) 00:39:55.883 slat (usec): min=4, max=114, avg=28.66, stdev=17.64 00:39:55.883 clat (msec): min=12, max=132, avg=24.37, stdev= 5.47 00:39:55.883 lat (msec): min=12, max=132, avg=24.40, stdev= 5.47 00:39:55.883 clat percentiles (msec): 00:39:55.883 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.883 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.883 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.883 | 99.00th=[ 26], 99.50th=[ 29], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.883 | 99.99th=[ 133] 00:39:55.883 bw ( KiB/s): min= 2304, max= 2792, per=4.19%, avg=2610.25, stdev=102.91, samples=20 00:39:55.883 iops : min= 576, max= 698, avg=652.55, stdev=25.72, samples=20 00:39:55.883 lat (msec) : 20=0.92%, 50=98.84%, 250=0.24% 00:39:55.883 cpu : usr=99.07%, sys=0.64%, ctx=39, majf=0, minf=56 00:39:55.883 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 
32=0.0%, >=64=0.0% 00:39:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename1: (groupid=0, jobs=1): err= 0: pid=1234708: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=649, BW=2600KiB/s (2662kB/s)(25.7MiB/10118msec) 00:39:55.883 slat (usec): min=5, max=110, avg=23.80, stdev=18.97 00:39:55.883 clat (msec): min=10, max=132, avg=24.43, stdev= 5.44 00:39:55.883 lat (msec): min=10, max=132, avg=24.45, stdev= 5.43 00:39:55.883 clat percentiles (msec): 00:39:55.883 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.883 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.883 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.883 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.883 | 99.99th=[ 133] 00:39:55.883 bw ( KiB/s): min= 2560, max= 2816, per=4.22%, avg=2625.00, stdev=77.72, samples=20 00:39:55.883 iops : min= 640, max= 704, avg=656.20, stdev=19.43, samples=20 00:39:55.883 lat (msec) : 20=1.09%, 50=98.66%, 250=0.24% 00:39:55.883 cpu : usr=99.11%, sys=0.59%, ctx=24, majf=0, minf=48 00:39:55.883 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename1: (groupid=0, jobs=1): err= 0: pid=1234709: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.9MiB/10124msec) 00:39:55.883 slat (usec): min=5, max=115, avg=17.88, stdev=14.60 
00:39:55.883 clat (msec): min=7, max=131, avg=24.27, stdev= 5.74 00:39:55.883 lat (msec): min=7, max=131, avg=24.29, stdev= 5.74 00:39:55.883 clat percentiles (msec): 00:39:55.883 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.883 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.883 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.883 | 99.00th=[ 31], 99.50th=[ 35], 99.90th=[ 132], 99.95th=[ 132], 00:39:55.883 | 99.99th=[ 132] 00:39:55.883 bw ( KiB/s): min= 2560, max= 2944, per=4.26%, avg=2650.35, stdev=90.52, samples=20 00:39:55.883 iops : min= 640, max= 736, avg=662.55, stdev=22.65, samples=20 00:39:55.883 lat (msec) : 10=0.48%, 20=4.01%, 50=95.27%, 250=0.24% 00:39:55.883 cpu : usr=98.94%, sys=0.75%, ctx=37, majf=0, minf=70 00:39:55.883 IO depths : 1=0.6%, 2=4.9%, 4=19.9%, 8=62.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:39:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename2: (groupid=0, jobs=1): err= 0: pid=1234710: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=656, BW=2626KiB/s (2689kB/s)(25.8MiB/10064msec) 00:39:55.883 slat (usec): min=5, max=124, avg=20.97, stdev=19.37 00:39:55.883 clat (usec): min=7647, max=82257, avg=24191.75, stdev=3381.09 00:39:55.883 lat (usec): min=7655, max=82264, avg=24212.71, stdev=3380.02 00:39:55.883 clat percentiles (usec): 00:39:55.883 | 1.00th=[11863], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:39:55.883 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:39:55.883 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:39:55.883 | 99.00th=[25297], 99.50th=[25560], 99.90th=[82314], 99.95th=[82314], 00:39:55.883 | 99.99th=[82314] 00:39:55.883 bw ( 
KiB/s): min= 2560, max= 2944, per=4.24%, avg=2637.55, stdev=96.24, samples=20 00:39:55.883 iops : min= 640, max= 736, avg=659.35, stdev=24.07, samples=20 00:39:55.883 lat (msec) : 10=0.48%, 20=1.42%, 50=97.85%, 100=0.24% 00:39:55.883 cpu : usr=98.96%, sys=0.77%, ctx=13, majf=0, minf=31 00:39:55.883 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:55.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.883 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.883 filename2: (groupid=0, jobs=1): err= 0: pid=1234711: Fri Nov 29 13:23:57 2024 00:39:55.883 read: IOPS=648, BW=2595KiB/s (2657kB/s)(25.6MiB/10113msec) 00:39:55.883 slat (usec): min=5, max=109, avg=23.25, stdev=17.84 00:39:55.883 clat (msec): min=11, max=134, avg=24.48, stdev= 5.41 00:39:55.883 lat (msec): min=11, max=134, avg=24.50, stdev= 5.40 00:39:55.883 clat percentiles (msec): 00:39:55.883 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.883 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.883 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.883 | 99.00th=[ 26], 99.50th=[ 27], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.883 | 99.99th=[ 134] 00:39:55.883 bw ( KiB/s): min= 2560, max= 2688, per=4.20%, avg=2617.30, stdev=65.01, samples=20 00:39:55.883 iops : min= 640, max= 672, avg=654.30, stdev=16.23, samples=20 00:39:55.884 lat (msec) : 20=0.55%, 50=99.21%, 250=0.24% 00:39:55.884 cpu : usr=98.86%, sys=0.85%, ctx=20, majf=0, minf=34 00:39:55.884 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: 
total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234712: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=646, BW=2584KiB/s (2646kB/s)(25.4MiB/10080msec) 00:39:55.884 slat (usec): min=5, max=112, avg=29.03, stdev=17.53 00:39:55.884 clat (msec): min=16, max=132, avg=24.48, stdev= 5.43 00:39:55.884 lat (msec): min=16, max=132, avg=24.51, stdev= 5.43 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.884 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 25], 00:39:55.884 | 99.00th=[ 26], 99.50th=[ 33], 99.90th=[ 133], 99.95th=[ 133], 00:39:55.884 | 99.99th=[ 133] 00:39:55.884 bw ( KiB/s): min= 2304, max= 2688, per=4.17%, avg=2597.50, stdev=102.16, samples=20 00:39:55.884 iops : min= 576, max= 672, avg=649.30, stdev=25.51, samples=20 00:39:55.884 lat (msec) : 20=0.09%, 50=99.66%, 250=0.25% 00:39:55.884 cpu : usr=98.91%, sys=0.73%, ctx=52, majf=0, minf=29 00:39:55.884 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234713: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=645, BW=2583KiB/s (2645kB/s)(25.4MiB/10077msec) 00:39:55.884 slat (nsec): min=5281, max=93907, avg=24686.34, stdev=12409.31 00:39:55.884 clat (msec): min=14, max=130, avg=24.56, stdev= 5.43 00:39:55.884 lat (msec): min=14, max=130, avg=24.58, stdev= 5.43 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 21], 5.00th=[ 24], 
10.00th=[ 24], 20.00th=[ 24], 00:39:55.884 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.884 | 99.00th=[ 33], 99.50th=[ 34], 99.90th=[ 130], 99.95th=[ 130], 00:39:55.884 | 99.99th=[ 131] 00:39:55.884 bw ( KiB/s): min= 2304, max= 2693, per=4.17%, avg=2597.25, stdev=101.24, samples=20 00:39:55.884 iops : min= 576, max= 673, avg=649.30, stdev=25.30, samples=20 00:39:55.884 lat (msec) : 20=0.88%, 50=98.88%, 250=0.25% 00:39:55.884 cpu : usr=98.34%, sys=1.09%, ctx=136, majf=0, minf=53 00:39:55.884 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: total=6508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234714: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=651, BW=2604KiB/s (2667kB/s)(25.8MiB/10144msec) 00:39:55.884 slat (usec): min=5, max=112, avg=20.17, stdev=16.53 00:39:55.884 clat (msec): min=6, max=147, avg=24.37, stdev= 6.60 00:39:55.884 lat (msec): min=6, max=147, avg=24.39, stdev= 6.60 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 12], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 24], 00:39:55.884 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 31], 00:39:55.884 | 99.00th=[ 42], 99.50th=[ 45], 99.90th=[ 126], 99.95th=[ 148], 00:39:55.884 | 99.99th=[ 148] 00:39:55.884 bw ( KiB/s): min= 2452, max= 2888, per=4.23%, avg=2635.85, stdev=115.25, samples=20 00:39:55.884 iops : min= 613, max= 722, avg=658.95, stdev=28.80, samples=20 00:39:55.884 lat (msec) : 10=0.42%, 20=10.05%, 50=89.28%, 250=0.24% 00:39:55.884 cpu : usr=98.75%, sys=0.79%, 
ctx=71, majf=0, minf=55 00:39:55.884 IO depths : 1=2.1%, 2=5.0%, 4=15.7%, 8=66.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: total=6604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234715: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.4MiB/10078msec) 00:39:55.884 slat (usec): min=5, max=105, avg=18.96, stdev=14.75 00:39:55.884 clat (msec): min=7, max=105, avg=23.71, stdev= 5.36 00:39:55.884 lat (msec): min=7, max=105, avg=23.73, stdev= 5.37 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 22], 00:39:55.884 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 30], 00:39:55.884 | 99.00th=[ 41], 99.50th=[ 45], 99.90th=[ 92], 99.95th=[ 106], 00:39:55.884 | 99.99th=[ 106] 00:39:55.884 bw ( KiB/s): min= 2368, max= 2976, per=4.33%, avg=2694.85, stdev=168.34, samples=20 00:39:55.884 iops : min= 592, max= 744, avg=673.70, stdev=42.09, samples=20 00:39:55.884 lat (msec) : 10=0.15%, 20=16.96%, 50=82.51%, 100=0.30%, 250=0.09% 00:39:55.884 cpu : usr=98.87%, sys=0.85%, ctx=14, majf=0, minf=66 00:39:55.884 IO depths : 1=2.2%, 2=4.8%, 4=12.1%, 8=69.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=90.8%, 8=5.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234716: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=645, 
BW=2583KiB/s (2645kB/s)(25.4MiB/10077msec) 00:39:55.884 slat (nsec): min=5721, max=83511, avg=15495.63, stdev=10781.07 00:39:55.884 clat (msec): min=12, max=130, avg=24.50, stdev= 3.61 00:39:55.884 lat (msec): min=12, max=130, avg=24.52, stdev= 3.61 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:39:55.884 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.884 | 99.00th=[ 32], 99.50th=[ 43], 99.90th=[ 83], 99.95th=[ 83], 00:39:55.884 | 99.99th=[ 131] 00:39:55.884 bw ( KiB/s): min= 2320, max= 2688, per=4.17%, avg=2597.25, stdev=107.93, samples=20 00:39:55.884 iops : min= 580, max= 672, avg=649.30, stdev=26.99, samples=20 00:39:55.884 lat (msec) : 20=0.77%, 50=98.99%, 100=0.22%, 250=0.03% 00:39:55.884 cpu : usr=98.95%, sys=0.67%, ctx=116, majf=0, minf=53 00:39:55.884 IO depths : 1=4.9%, 2=11.0%, 4=24.6%, 8=51.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:39:55.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.884 issued rwts: total=6508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.884 filename2: (groupid=0, jobs=1): err= 0: pid=1234717: Fri Nov 29 13:23:57 2024 00:39:55.884 read: IOPS=647, BW=2591KiB/s (2653kB/s)(25.5MiB/10079msec) 00:39:55.884 slat (nsec): min=5714, max=86371, avg=16192.98, stdev=12565.81 00:39:55.884 clat (msec): min=10, max=139, avg=24.60, stdev= 5.26 00:39:55.884 lat (msec): min=10, max=139, avg=24.62, stdev= 5.26 00:39:55.884 clat percentiles (msec): 00:39:55.884 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 25], 00:39:55.884 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:39:55.884 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:39:55.884 | 99.00th=[ 33], 99.50th=[ 
37], 99.90th=[ 131], 99.95th=[ 140], 00:39:55.884 | 99.99th=[ 140] 00:39:55.884 bw ( KiB/s): min= 2352, max= 2688, per=4.18%, avg=2605.05, stdev=80.14, samples=20 00:39:55.884 iops : min= 588, max= 672, avg=651.25, stdev=20.03, samples=20 00:39:55.885 lat (msec) : 20=2.11%, 50=97.64%, 100=0.06%, 250=0.18% 00:39:55.885 cpu : usr=98.56%, sys=0.99%, ctx=72, majf=0, minf=53 00:39:55.885 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=80.3%, 16=18.0%, 32=0.0%, >=64=0.0% 00:39:55.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.885 complete : 0=0.0%, 4=89.6%, 8=9.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.885 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:55.885 00:39:55.885 Run status group 0 (all jobs): 00:39:55.885 READ: bw=60.8MiB/s (63.8MB/s), 2579KiB/s-2728KiB/s (2641kB/s-2794kB/s), io=617MiB (647MB), run=10022-10144msec 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:55.885 13:23:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 bdev_null0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 [2024-11-29 13:23:57.405875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:55.885 13:23:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 bdev_null1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.885 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.886 { 00:39:55.886 "params": { 00:39:55.886 "name": "Nvme$subsystem", 00:39:55.886 "trtype": "$TEST_TRANSPORT", 00:39:55.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.886 "adrfam": "ipv4", 00:39:55.886 "trsvcid": "$NVMF_PORT", 00:39:55.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.886 "hdgst": ${hdgst:-false}, 00:39:55.886 "ddgst": ${ddgst:-false} 00:39:55.886 }, 00:39:55.886 "method": "bdev_nvme_attach_controller" 00:39:55.886 } 00:39:55.886 EOF 00:39:55.886 )") 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.886 { 00:39:55.886 "params": { 00:39:55.886 "name": "Nvme$subsystem", 00:39:55.886 "trtype": "$TEST_TRANSPORT", 00:39:55.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.886 "adrfam": "ipv4", 00:39:55.886 "trsvcid": "$NVMF_PORT", 00:39:55.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.886 "hdgst": ${hdgst:-false}, 00:39:55.886 "ddgst": ${ddgst:-false} 00:39:55.886 }, 00:39:55.886 "method": "bdev_nvme_attach_controller" 00:39:55.886 } 00:39:55.886 EOF 00:39:55.886 )") 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:55.886 "params": { 00:39:55.886 "name": "Nvme0", 00:39:55.886 "trtype": "tcp", 00:39:55.886 "traddr": "10.0.0.2", 00:39:55.886 "adrfam": "ipv4", 00:39:55.886 "trsvcid": "4420", 00:39:55.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:55.886 "hdgst": false, 00:39:55.886 "ddgst": false 00:39:55.886 }, 00:39:55.886 "method": "bdev_nvme_attach_controller" 00:39:55.886 },{ 00:39:55.886 "params": { 00:39:55.886 "name": "Nvme1", 00:39:55.886 "trtype": "tcp", 00:39:55.886 "traddr": "10.0.0.2", 00:39:55.886 "adrfam": "ipv4", 00:39:55.886 "trsvcid": "4420", 00:39:55.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:55.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:55.886 "hdgst": false, 00:39:55.886 "ddgst": false 00:39:55.886 }, 00:39:55.886 "method": "bdev_nvme_attach_controller" 00:39:55.886 }' 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:55.886 13:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.886 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:55.886 ... 00:39:55.886 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:55.886 ... 00:39:55.886 fio-3.35 00:39:55.886 Starting 4 threads 00:40:01.176 00:40:01.176 filename0: (groupid=0, jobs=1): err= 0: pid=1237208: Fri Nov 29 13:24:03 2024 00:40:01.176 read: IOPS=2939, BW=23.0MiB/s (24.1MB/s)(115MiB/5001msec) 00:40:01.176 slat (nsec): min=5517, max=92846, avg=7691.56, stdev=3357.04 00:40:01.176 clat (usec): min=714, max=4744, avg=2701.36, stdev=295.05 00:40:01.176 lat (usec): min=722, max=4752, avg=2709.05, stdev=294.97 00:40:01.176 clat percentiles (usec): 00:40:01.176 | 1.00th=[ 1975], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2573], 00:40:01.176 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:40:01.176 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3163], 00:40:01.176 | 99.00th=[ 3818], 99.50th=[ 4047], 99.90th=[ 4424], 99.95th=[ 4490], 00:40:01.176 | 99.99th=[ 4752] 00:40:01.176 bw ( KiB/s): min=23088, max=23952, per=25.21%, avg=23455.78, stdev=313.55, samples=9 00:40:01.176 iops : min= 2886, max= 2994, avg=2931.89, stdev=39.24, samples=9 00:40:01.176 lat (usec) : 750=0.01%, 1000=0.02% 00:40:01.176 lat (msec) : 2=1.14%, 
4=98.27%, 10=0.56% 00:40:01.176 cpu : usr=96.58%, sys=3.14%, ctx=8, majf=0, minf=48 00:40:01.176 IO depths : 1=0.1%, 2=0.2%, 4=70.7%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 issued rwts: total=14702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:01.176 filename0: (groupid=0, jobs=1): err= 0: pid=1237209: Fri Nov 29 13:24:03 2024 00:40:01.176 read: IOPS=2905, BW=22.7MiB/s (23.8MB/s)(114MiB/5002msec) 00:40:01.176 slat (usec): min=5, max=1281, avg= 9.05, stdev=11.33 00:40:01.176 clat (usec): min=1043, max=4769, avg=2728.93, stdev=229.05 00:40:01.176 lat (usec): min=1061, max=4775, avg=2737.98, stdev=229.07 00:40:01.176 clat percentiles (usec): 00:40:01.176 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671], 00:40:01.176 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:40:01.176 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3032], 00:40:01.176 | 99.00th=[ 3589], 99.50th=[ 3720], 99.90th=[ 4293], 99.95th=[ 4424], 00:40:01.176 | 99.99th=[ 4752] 00:40:01.176 bw ( KiB/s): min=23120, max=23632, per=25.01%, avg=23276.44, stdev=149.40, samples=9 00:40:01.176 iops : min= 2890, max= 2954, avg=2909.56, stdev=18.68, samples=9 00:40:01.176 lat (msec) : 2=0.76%, 4=99.06%, 10=0.18% 00:40:01.176 cpu : usr=96.32%, sys=3.40%, ctx=7, majf=0, minf=90 00:40:01.176 IO depths : 1=0.1%, 2=0.2%, 4=72.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 issued rwts: total=14533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:01.176 filename1: (groupid=0, jobs=1): 
err= 0: pid=1237210: Fri Nov 29 13:24:03 2024 00:40:01.176 read: IOPS=2876, BW=22.5MiB/s (23.6MB/s)(112MiB/5002msec) 00:40:01.176 slat (nsec): min=8035, max=72891, avg=9761.53, stdev=4264.24 00:40:01.176 clat (usec): min=1200, max=5372, avg=2755.47, stdev=290.93 00:40:01.176 lat (usec): min=1208, max=5398, avg=2765.23, stdev=291.11 00:40:01.176 clat percentiles (usec): 00:40:01.176 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671], 00:40:01.176 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:40:01.176 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3228], 00:40:01.176 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 4948], 99.95th=[ 5276], 00:40:01.176 | 99.99th=[ 5342] 00:40:01.176 bw ( KiB/s): min=22592, max=23328, per=24.74%, avg=23025.78, stdev=219.15, samples=9 00:40:01.176 iops : min= 2824, max= 2916, avg=2878.22, stdev=27.39, samples=9 00:40:01.176 lat (msec) : 2=0.72%, 4=98.46%, 10=0.82% 00:40:01.176 cpu : usr=96.24%, sys=3.44%, ctx=17, majf=0, minf=123 00:40:01.176 IO depths : 1=0.1%, 2=0.4%, 4=71.0%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 issued rwts: total=14387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:01.176 filename1: (groupid=0, jobs=1): err= 0: pid=1237211: Fri Nov 29 13:24:03 2024 00:40:01.176 read: IOPS=2911, BW=22.7MiB/s (23.8MB/s)(114MiB/5002msec) 00:40:01.176 slat (nsec): min=5516, max=94266, avg=9232.22, stdev=4041.26 00:40:01.176 clat (usec): min=1155, max=5112, avg=2723.56, stdev=239.55 00:40:01.176 lat (usec): min=1163, max=5120, avg=2732.79, stdev=239.61 00:40:01.176 clat percentiles (usec): 00:40:01.176 | 1.00th=[ 2040], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2671], 00:40:01.176 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 
60.00th=[ 2737], 00:40:01.176 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3032], 00:40:01.176 | 99.00th=[ 3589], 99.50th=[ 3720], 99.90th=[ 4293], 99.95th=[ 4359], 00:40:01.176 | 99.99th=[ 5080] 00:40:01.176 bw ( KiB/s): min=22992, max=23712, per=25.04%, avg=23297.56, stdev=252.66, samples=9 00:40:01.176 iops : min= 2874, max= 2964, avg=2912.11, stdev=31.63, samples=9 00:40:01.176 lat (msec) : 2=0.78%, 4=99.02%, 10=0.21% 00:40:01.176 cpu : usr=96.12%, sys=3.56%, ctx=6, majf=0, minf=58 00:40:01.176 IO depths : 1=0.1%, 2=0.3%, 4=71.2%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.176 issued rwts: total=14561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:01.177 00:40:01.177 Run status group 0 (all jobs): 00:40:01.177 READ: bw=90.9MiB/s (95.3MB/s), 22.5MiB/s-23.0MiB/s (23.6MB/s-24.1MB/s), io=455MiB (477MB), run=5001-5002msec 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.177 00:40:01.177 real 0m24.572s 00:40:01.177 user 5m18.283s 00:40:01.177 sys 0m5.762s 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:01.177 ************************************ 00:40:01.177 END TEST fio_dif_rand_params 00:40:01.177 ************************************ 00:40:01.177 13:24:03 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:01.177 13:24:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:01.177 13:24:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.177 13:24:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:01.439 ************************************ 00:40:01.439 START TEST fio_dif_digest 00:40:01.439 ************************************ 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:01.439 bdev_null0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:01.439 [2024-11-29 13:24:03.923525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:01.439 13:24:03 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:01.439 { 00:40:01.439 "params": { 00:40:01.439 "name": "Nvme$subsystem", 00:40:01.439 "trtype": "$TEST_TRANSPORT", 00:40:01.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:01.439 "adrfam": "ipv4", 00:40:01.439 "trsvcid": "$NVMF_PORT", 00:40:01.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:01.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:01.439 "hdgst": ${hdgst:-false}, 00:40:01.439 "ddgst": ${ddgst:-false} 00:40:01.439 }, 00:40:01.439 "method": "bdev_nvme_attach_controller" 00:40:01.439 } 00:40:01.439 EOF 00:40:01.439 )") 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:01.439 "params": { 00:40:01.439 "name": "Nvme0", 00:40:01.439 "trtype": "tcp", 00:40:01.439 "traddr": "10.0.0.2", 00:40:01.439 "adrfam": "ipv4", 00:40:01.439 "trsvcid": "4420", 00:40:01.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:01.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:01.439 "hdgst": true, 00:40:01.439 "ddgst": true 00:40:01.439 }, 00:40:01.439 "method": "bdev_nvme_attach_controller" 00:40:01.439 }' 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:01.439 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:01.439 13:24:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:01.439 13:24:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:01.439 13:24:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:01.439 13:24:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:01.700 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:01.700 ... 
00:40:01.700 fio-3.35 00:40:01.700 Starting 3 threads 00:40:13.934 00:40:13.934 filename0: (groupid=0, jobs=1): err= 0: pid=1238533: Fri Nov 29 13:24:14 2024 00:40:13.934 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10046msec) 00:40:13.934 slat (nsec): min=5910, max=35235, avg=7333.05, stdev=1446.46 00:40:13.934 clat (usec): min=5744, max=93303, avg=12536.63, stdev=11075.89 00:40:13.934 lat (usec): min=5754, max=93310, avg=12543.96, stdev=11075.86 00:40:13.934 clat percentiles (usec): 00:40:13.934 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 8979], 00:40:13.934 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:40:13.934 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11338], 95.00th=[50070], 00:40:13.934 | 99.00th=[51643], 99.50th=[52167], 99.90th=[91751], 99.95th=[92799], 00:40:13.934 | 99.99th=[92799] 00:40:13.934 bw ( KiB/s): min=23552, max=38400, per=27.73%, avg=30681.60, stdev=4785.92, samples=20 00:40:13.934 iops : min= 184, max= 300, avg=239.70, stdev=37.39, samples=20 00:40:13.934 lat (msec) : 10=63.57%, 20=29.68%, 50=1.54%, 100=5.21% 00:40:13.934 cpu : usr=93.73%, sys=6.01%, ctx=18, majf=0, minf=109 00:40:13.934 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.934 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.934 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:13.934 filename0: (groupid=0, jobs=1): err= 0: pid=1238534: Fri Nov 29 13:24:14 2024 00:40:13.934 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(394MiB/10047msec) 00:40:13.934 slat (nsec): min=5919, max=55578, avg=7362.29, stdev=1683.94 00:40:13.934 clat (usec): min=5460, max=52712, avg=9534.27, stdev=2197.76 00:40:13.934 lat (usec): min=5466, max=52721, avg=9541.63, stdev=2197.80 00:40:13.934 clat percentiles (usec): 00:40:13.935 
| 1.00th=[ 6194], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 7963], 00:40:13.935 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:40:13.935 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:40:13.935 | 99.00th=[12518], 99.50th=[13042], 99.90th=[47449], 99.95th=[52691], 00:40:13.935 | 99.99th=[52691] 00:40:13.935 bw ( KiB/s): min=35328, max=43264, per=36.47%, avg=40345.60, stdev=2140.88, samples=20 00:40:13.935 iops : min= 276, max= 338, avg=315.20, stdev=16.73, samples=20 00:40:13.935 lat (msec) : 10=59.83%, 20=40.01%, 50=0.06%, 100=0.10% 00:40:13.935 cpu : usr=93.28%, sys=6.46%, ctx=19, majf=0, minf=182 00:40:13.935 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.935 issued rwts: total=3154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:13.935 filename0: (groupid=0, jobs=1): err= 0: pid=1238535: Fri Nov 29 13:24:14 2024 00:40:13.935 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(391MiB/10046msec) 00:40:13.935 slat (nsec): min=5903, max=37812, avg=6893.60, stdev=1228.53 00:40:13.935 clat (usec): min=5844, max=46236, avg=9593.03, stdev=1582.03 00:40:13.935 lat (usec): min=5851, max=46242, avg=9599.93, stdev=1582.01 00:40:13.935 clat percentiles (usec): 00:40:13.935 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 8094], 00:40:13.935 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:40:13.935 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:40:13.935 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13960], 99.95th=[15926], 00:40:13.935 | 99.99th=[46400] 00:40:13.935 bw ( KiB/s): min=37632, max=43264, per=36.21%, avg=40051.20, stdev=1563.76, samples=20 00:40:13.935 iops : min= 294, max= 338, avg=312.90, 
stdev=12.22, samples=20 00:40:13.935 lat (msec) : 10=54.60%, 20=45.37%, 50=0.03% 00:40:13.935 cpu : usr=92.74%, sys=6.99%, ctx=24, majf=0, minf=196 00:40:13.935 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:13.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:13.935 issued rwts: total=3130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:13.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:13.935 00:40:13.935 Run status group 0 (all jobs): 00:40:13.935 READ: bw=108MiB/s (113MB/s), 29.8MiB/s-39.2MiB/s (31.3MB/s-41.1MB/s), io=1085MiB (1138MB), run=10046-10047msec 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.935 
00:40:13.935 real 0m11.215s 00:40:13.935 user 0m41.915s 00:40:13.935 sys 0m2.309s 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.935 13:24:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:13.935 ************************************ 00:40:13.935 END TEST fio_dif_digest 00:40:13.935 ************************************ 00:40:13.935 13:24:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:13.935 13:24:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.935 rmmod nvme_tcp 00:40:13.935 rmmod nvme_fabrics 00:40:13.935 rmmod nvme_keyring 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1228252 ']' 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1228252 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1228252 ']' 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1228252 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228252 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228252' 00:40:13.935 killing process with pid 1228252 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1228252 00:40:13.935 13:24:15 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1228252 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:13.935 13:24:15 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:16.483 Waiting for block devices as requested 00:40:16.483 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:16.483 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:16.483 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:16.483 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:16.483 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:16.744 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:16.744 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:16.744 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:17.005 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:17.005 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:17.266 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:17.266 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:17.266 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:17.527 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:17.527 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:17.527 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:17.788 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:40:18.049 13:24:20 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:18.049 13:24:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.049 13:24:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:18.049 13:24:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:19.964 13:24:22 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:19.964 00:40:19.964 real 1m18.571s 00:40:19.964 user 7m55.255s 00:40:19.964 sys 0m23.936s 00:40:19.964 13:24:22 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:19.964 13:24:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:19.964 ************************************ 00:40:19.964 END TEST nvmf_dif 00:40:19.964 ************************************ 00:40:20.226 13:24:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:20.226 13:24:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:20.226 13:24:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.226 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:40:20.226 ************************************ 00:40:20.226 START TEST nvmf_abort_qd_sizes 00:40:20.226 ************************************ 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:20.226 * Looking for test storage... 
00:40:20.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:20.226 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:40:20.227 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:40:20.227 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:20.227 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.488 --rc genhtml_branch_coverage=1 00:40:20.488 --rc genhtml_function_coverage=1 00:40:20.488 --rc genhtml_legend=1 00:40:20.488 --rc geninfo_all_blocks=1 00:40:20.488 --rc geninfo_unexecuted_blocks=1 00:40:20.488 00:40:20.488 ' 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.488 --rc genhtml_branch_coverage=1 00:40:20.488 --rc genhtml_function_coverage=1 00:40:20.488 --rc genhtml_legend=1 00:40:20.488 --rc 
geninfo_all_blocks=1 00:40:20.488 --rc geninfo_unexecuted_blocks=1 00:40:20.488 00:40:20.488 ' 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.488 --rc genhtml_branch_coverage=1 00:40:20.488 --rc genhtml_function_coverage=1 00:40:20.488 --rc genhtml_legend=1 00:40:20.488 --rc geninfo_all_blocks=1 00:40:20.488 --rc geninfo_unexecuted_blocks=1 00:40:20.488 00:40:20.488 ' 00:40:20.488 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.489 --rc genhtml_branch_coverage=1 00:40:20.489 --rc genhtml_function_coverage=1 00:40:20.489 --rc genhtml_legend=1 00:40:20.489 --rc geninfo_all_blocks=1 00:40:20.489 --rc geninfo_unexecuted_blocks=1 00:40:20.489 00:40:20.489 ' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:20.489 13:24:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:20.489 13:24:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:20.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:40:20.489 13:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.633 13:24:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:28.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:28.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.633 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:28.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:28.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:28.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:40:28.634 00:40:28.634 --- 10.0.0.2 ping statistics --- 00:40:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.634 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:28.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:40:28.634 00:40:28.634 --- 10.0.0.1 ping statistics --- 00:40:28.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.634 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:28.634 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:31.310 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:31.310 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:31.310 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:31.310 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:31.310 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:31.311 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:31.311 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:31.311 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:31.311 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:31.614 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:31.886 13:24:34 
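The `nvmf_tcp_init` trace above builds a two-sided TCP test topology: the target-side interface (`cvl_0_0`, 10.0.0.2) is moved into a network namespace while the initiator-side interface (`cvl_0_1`, 10.0.0.1) stays in the root namespace, with an iptables rule opening port 4420. A dry-run sketch of that sequence, reconstructed from the commands recorded in the log (the `run` helper is hypothetical and only records the commands, since actually executing them requires root):

```shell
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
# "run" is a hypothetical recorder; real execution needs root privileges.
ns=cvl_0_0_ns_spdk
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

run ip netns add "$ns"                                    # target namespace
run ip link set cvl_0_0 netns "$ns"                       # move target NIC in
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

The cross-namespace pings in the log (10.0.0.1 ↔ 10.0.0.2) then verify this topology before the target application is started inside the namespace.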
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1248595 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1248595 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1248595 ']' 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:31.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:31.886 13:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:32.147 [2024-11-29 13:24:34.580212] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:40:32.147 [2024-11-29 13:24:34.580270] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.147 [2024-11-29 13:24:34.679504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:32.147 [2024-11-29 13:24:34.737431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:32.147 [2024-11-29 13:24:34.737483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:32.147 [2024-11-29 13:24:34.737492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:32.147 [2024-11-29 13:24:34.737499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:32.147 [2024-11-29 13:24:34.737506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:32.147 [2024-11-29 13:24:34.739480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.147 [2024-11-29 13:24:34.739629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:32.147 [2024-11-29 13:24:34.739789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.147 [2024-11-29 13:24:34.739790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:32.718 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:32.718 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:40:32.978 13:24:35 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:32.978 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:32.978 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:32.978 13:24:35 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:32.978 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.979 13:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:32.979 ************************************ 00:40:32.979 START TEST spdk_target_abort 00:40:32.979 ************************************ 00:40:32.979 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:40:32.979 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:32.979 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:32.979 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.979 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.239 spdk_targetn1 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.239 [2024-11-29 13:24:35.800561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:33.239 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:33.240 [2024-11-29 13:24:35.848885] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:33.240 13:24:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:33.501 [2024-11-29 13:24:36.139865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:664 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:40:33.501 [2024-11-29 13:24:36.139899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0054 p:1 m:0 dnr:0 00:40:33.501 [2024-11-29 13:24:36.172673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1728 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:40:33.501 [2024-11-29 13:24:36.172696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:40:33.763 [2024-11-29 13:24:36.195649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2536 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:40:33.763 [2024-11-29 
13:24:36.195670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:33.763 [2024-11-29 13:24:36.219809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3384 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:40:33.763 [2024-11-29 13:24:36.219830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a8 p:0 m:0 dnr:0 00:40:33.763 [2024-11-29 13:24:36.220803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3456 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:40:33.763 [2024-11-29 13:24:36.220825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b1 p:0 m:0 dnr:0 00:40:37.062 Initializing NVMe Controllers 00:40:37.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:37.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:37.062 Initialization complete. Launching workers. 
00:40:37.062 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11564, failed: 5 00:40:37.062 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2614, failed to submit 8955 00:40:37.063 success 754, unsuccessful 1860, failed 0 00:40:37.063 13:24:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:37.063 13:24:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:37.063 [2024-11-29 13:24:39.304469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:528 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:40:37.063 [2024-11-29 13:24:39.304507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:40:37.063 [2024-11-29 13:24:39.320321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:904 len:8 PRP1 0x200004e40000 PRP2 0x0 00:40:37.063 [2024-11-29 13:24:39.320345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:40:37.063 [2024-11-29 13:24:39.367151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:2024 len:8 PRP1 0x200004e42000 PRP2 0x0 00:40:37.063 [2024-11-29 13:24:39.367182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:40:37.063 [2024-11-29 13:24:39.431019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3392 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:40:37.063 [2024-11-29 13:24:39.431042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:4 cid:186 cdw0:0 sqhd:00b7 p:0 m:0 dnr:0 00:40:37.063 [2024-11-29 13:24:39.455133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:4008 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:40:37.063 [2024-11-29 13:24:39.455155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:40:39.607 [2024-11-29 13:24:42.161018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:65704 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:40:39.607 [2024-11-29 13:24:42.161051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0017 p:1 m:0 dnr:0 00:40:39.868 Initializing NVMe Controllers 00:40:39.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:39.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:39.868 Initialization complete. Launching workers. 
00:40:39.868 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8575, failed: 6 00:40:39.868 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7355 00:40:39.868 success 337, unsuccessful 889, failed 0 00:40:39.868 13:24:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:39.868 13:24:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:40.129 [2024-11-29 13:24:42.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:145 nsid:1 lba:2000 len:8 PRP1 0x200004ada000 PRP2 0x0 00:40:40.129 [2024-11-29 13:24:42.558161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:145 cdw0:0 sqhd:0031 p:1 m:0 dnr:0 00:40:40.129 [2024-11-29 13:24:42.573206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:161 nsid:1 lba:3728 len:8 PRP1 0x200004b16000 PRP2 0x0 00:40:40.129 [2024-11-29 13:24:42.573223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:161 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:40:43.426 Initializing NVMe Controllers 00:40:43.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:43.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:43.426 Initialization complete. Launching workers. 
00:40:43.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43642, failed: 2 00:40:43.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2852, failed to submit 40792 00:40:43.426 success 608, unsuccessful 2244, failed 0 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.426 13:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1248595 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1248595 ']' 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1248595 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:44.809 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248595 00:40:45.070 13:24:47 
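The per-queue-depth summaries printed by the abort example can be cross-checked arithmetically. Taking the qd=64 run above as an example (numbers copied from the log; the two accounting invariants are an assumption about how the summary fields relate, not stated by the tool itself):

```shell
# Cross-check of the qd=64 abort-run summary from the log above.
# Field relationships are assumed, values are copied verbatim.
io_completed=43642 io_failed=2
aborts_submitted=2852 aborts_failed_to_submit=40792
success=608 unsuccessful=2244 failed=0

# Every I/O, completed or failed, had a matching abort attempt:
#   2852 + 40792 == 43642 + 2
[ $((aborts_submitted + aborts_failed_to_submit)) -eq $((io_completed + io_failed)) ] \
    && echo "abort attempts cover all I/O"

# Submitted aborts resolve into success / unsuccessful / failed buckets:
#   608 + 2244 + 0 == 2852
[ $((success + unsuccessful + failed)) -eq "$aborts_submitted" ] \
    && echo "abort outcomes sum to submitted"
```

The qd=4 and qd=24 summaries earlier in the log satisfy the same two identities, which is a quick way to confirm a run's accounting is self-consistent.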
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248595' 00:40:45.070 killing process with pid 1248595 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1248595 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1248595 00:40:45.070 00:40:45.070 real 0m12.113s 00:40:45.070 user 0m49.370s 00:40:45.070 sys 0m1.990s 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:45.070 ************************************ 00:40:45.070 END TEST spdk_target_abort 00:40:45.070 ************************************ 00:40:45.070 13:24:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:45.070 13:24:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:45.070 13:24:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.070 13:24:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:45.070 ************************************ 00:40:45.070 START TEST kernel_target_abort 00:40:45.070 ************************************ 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:45.070 13:24:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:45.070 13:24:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:48.370 Waiting for block devices as requested 00:40:48.370 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:48.629 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:48.629 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:48.629 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:48.888 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:48.888 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:48.888 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:48.888 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:49.148 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:49.148 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:49.409 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:49.409 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:49.409 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:49.669 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:49.669 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:49.669 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:49.930 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:50.192 13:24:52 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:50.192 No valid GPT data, bailing 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:40:50.192 00:40:50.192 Discovery Log Number of Records 2, Generation counter 2 00:40:50.192 =====Discovery Log Entry 0====== 00:40:50.192 trtype: tcp 00:40:50.192 adrfam: ipv4 00:40:50.192 subtype: current discovery subsystem 00:40:50.192 treq: not specified, sq flow control disable supported 00:40:50.192 portid: 1 00:40:50.192 trsvcid: 4420 00:40:50.192 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:50.192 traddr: 10.0.0.1 00:40:50.192 eflags: none 00:40:50.192 sectype: none 00:40:50.192 =====Discovery Log Entry 1====== 00:40:50.192 trtype: tcp 00:40:50.192 adrfam: ipv4 00:40:50.192 subtype: nvme subsystem 00:40:50.192 treq: not specified, sq flow control disable supported 00:40:50.192 portid: 1 00:40:50.192 trsvcid: 4420 00:40:50.192 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:50.192 traddr: 10.0.0.1 00:40:50.192 eflags: none 00:40:50.192 sectype: none 00:40:50.192 13:24:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:50.192 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:50.453 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:53.754 Initializing NVMe Controllers 00:40:53.754 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:53.754 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:53.754 Initialization complete. Launching workers. 
00:40:53.754 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67724, failed: 0 00:40:53.754 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67724, failed to submit 0 00:40:53.754 success 0, unsuccessful 67724, failed 0 00:40:53.754 13:24:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:53.754 13:24:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:57.054 Initializing NVMe Controllers 00:40:57.054 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:57.054 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:57.054 Initialization complete. Launching workers. 00:40:57.054 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117290, failed: 0 00:40:57.054 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29522, failed to submit 87768 00:40:57.054 success 0, unsuccessful 29522, failed 0 00:40:57.054 13:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:57.054 13:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:59.598 Initializing NVMe Controllers 00:40:59.598 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:59.598 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:59.598 Initialization complete. Launching workers. 
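[Editor's note] The per-run abort summaries in this log are internally consistent: each run's "I/O completed" count should equal "abort submitted" plus "failed to submit". A minimal Python sketch of that cross-check (the helper name `check_abort_stats` is mine, not part of the SPDK tree; the figures are copied from the qd=24 kernel-target run above):

```python
import re

def check_abort_stats(ns_line: str, ctrlr_line: str) -> bool:
    """Cross-check one abort run: every completed I/O should be accounted
    for as either an abort that was submitted or one that failed to submit."""
    completed = int(re.search(r"I/O completed: (\d+)", ns_line).group(1))
    submitted = int(re.search(r"abort submitted (\d+)", ctrlr_line).group(1))
    not_submitted = int(re.search(r"failed to submit (\d+)", ctrlr_line).group(1))
    return completed == submitted + not_submitted

# Figures from the qd=24 run logged above: 117290 == 29522 + 87768.
ok_qd24 = check_abort_stats(
    "NSID 1 I/O completed: 117290, failed: 0",
    "abort submitted 29522, failed to submit 87768",
)
```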
00:40:59.598 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145515, failed: 0 00:40:59.598 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36398, failed to submit 109117 00:40:59.598 success 0, unsuccessful 36398, failed 0 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:59.598 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:59.858 13:25:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:03.155 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:03.155 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:03.414 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:03.414 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:03.414 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:05.329 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:05.329 00:41:05.329 real 0m20.283s 00:41:05.329 user 0m9.933s 00:41:05.329 sys 0m6.045s 00:41:05.329 13:25:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.329 13:25:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:05.329 ************************************ 00:41:05.329 END TEST kernel_target_abort 00:41:05.329 ************************************ 00:41:05.589 13:25:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:05.590 rmmod nvme_tcp 00:41:05.590 rmmod nvme_fabrics 00:41:05.590 rmmod nvme_keyring 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1248595 ']' 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1248595 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1248595 ']' 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1248595 00:41:05.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1248595) - No such process 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1248595 is not found' 00:41:05.590 Process with pid 1248595 is not found 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:05.590 13:25:08 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:08.891 Waiting for block devices as requested 00:41:08.891 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:08.891 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:08.891 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:09.152 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:09.152 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:09.152 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:09.414 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:09.414 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:09.414 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:09.675 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:09.675 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:09.936 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:09.936 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:09.936 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:10.203 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:10.203 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:10.203 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:10.774 13:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:12.687 13:25:15 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:12.687 00:41:12.687 real 0m52.553s 00:41:12.687 user 1m4.757s 00:41:12.687 sys 0m19.325s 00:41:12.687 13:25:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:12.687 13:25:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:12.687 ************************************ 00:41:12.687 END TEST nvmf_abort_qd_sizes 00:41:12.687 ************************************ 00:41:12.687 13:25:15 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:12.687 13:25:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:12.687 13:25:15 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:41:12.687 13:25:15 -- common/autotest_common.sh@10 -- # set +x 00:41:12.687 ************************************ 00:41:12.687 START TEST keyring_file 00:41:12.687 ************************************ 00:41:12.687 13:25:15 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:12.948 * Looking for test storage... 00:41:12.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:12.948 13:25:15 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:12.948 13:25:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:12.948 13:25:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.949 --rc genhtml_branch_coverage=1 00:41:12.949 --rc genhtml_function_coverage=1 00:41:12.949 --rc genhtml_legend=1 00:41:12.949 --rc geninfo_all_blocks=1 00:41:12.949 --rc geninfo_unexecuted_blocks=1 00:41:12.949 00:41:12.949 ' 00:41:12.949 13:25:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.949 --rc genhtml_branch_coverage=1 00:41:12.949 --rc genhtml_function_coverage=1 00:41:12.949 --rc genhtml_legend=1 00:41:12.949 --rc geninfo_all_blocks=1 00:41:12.949 --rc 
geninfo_unexecuted_blocks=1 00:41:12.949 00:41:12.949 ' 00:41:12.949 13:25:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.949 --rc genhtml_branch_coverage=1 00:41:12.949 --rc genhtml_function_coverage=1 00:41:12.949 --rc genhtml_legend=1 00:41:12.949 --rc geninfo_all_blocks=1 00:41:12.949 --rc geninfo_unexecuted_blocks=1 00:41:12.949 00:41:12.949 ' 00:41:12.949 13:25:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.949 --rc genhtml_branch_coverage=1 00:41:12.949 --rc genhtml_function_coverage=1 00:41:12.949 --rc genhtml_legend=1 00:41:12.949 --rc geninfo_all_blocks=1 00:41:12.949 --rc geninfo_unexecuted_blocks=1 00:41:12.949 00:41:12.949 ' 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:12.949 13:25:15 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:12.949 13:25:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:12.949 13:25:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:12.949 13:25:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.949 13:25:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.949 13:25:15 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.949 13:25:15 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.949 13:25:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.949 13:25:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:12.949 13:25:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:41:12.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:12.949 13:25:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zts8o4V0SZ 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:12.949 13:25:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:12.949 13:25:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zts8o4V0SZ 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zts8o4V0SZ 00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zts8o4V0SZ 00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BJRSSCKpAf 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:13.210 13:25:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BJRSSCKpAf 00:41:13.210 13:25:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BJRSSCKpAf 00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BJRSSCKpAf 
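The xtrace above shows `format_interchange_psk` handing the raw hex key and digest to an inline `python -` snippet to produce the NVMe TLS PSK in interchange form (the `NVMeTLSkey-1:...` string written to the temp file). As a hedged sketch only — the function name below and the exact field layout are assumptions for illustration, not code copied from SPDK's `nvmf/common.sh` — the transformation is essentially: decode the hex key, append a little-endian CRC32 of it, base64-encode, and wrap with the prefix and a two-digit hash indicator (`00` here, meaning no PSK digest):

```python
import base64
import zlib


def format_interchange_psk(hex_key: str, hash_id: int) -> str:
    """Illustrative reconstruction of the interchange-format step traced above."""
    # Decode the configured hex PSK into raw bytes.
    key = bytes.fromhex(hex_key)
    # Append a little-endian CRC32 of the key material, then base64-encode.
    crc = zlib.crc32(key).to_bytes(4, "little")
    encoded = base64.b64encode(key + crc).decode("ascii")
    # hash_id 0 -> "00" (no digest); the trace passes digest=0 for both keys.
    return f"NVMeTLSkey-1:{hash_id:02x}:{encoded}:"


# Same key0 value as in the trace: 00112233445566778899aabbccddeeff
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The resulting string is what `keyring_file_add_key` later registers from `/tmp/tmp.Zts8o4V0SZ`; the trailing CRC32 lets a consumer validate the decoded key material before use.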
00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=1258956 00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1258956 00:41:13.210 13:25:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:13.210 13:25:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1258956 ']' 00:41:13.211 13:25:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:13.211 13:25:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:13.211 13:25:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:13.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:13.211 13:25:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:13.211 13:25:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:13.211 [2024-11-29 13:25:15.757374] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:41:13.211 [2024-11-29 13:25:15.757452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258956 ] 00:41:13.211 [2024-11-29 13:25:15.849526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.471 [2024-11-29 13:25:15.902611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:14.045 13:25:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:14.045 [2024-11-29 13:25:16.565423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:14.045 null0 00:41:14.045 [2024-11-29 13:25:16.597478] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:14.045 [2024-11-29 13:25:16.597913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.045 13:25:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:14.045 [2024-11-29 13:25:16.629542] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:14.045 request: 00:41:14.045 { 00:41:14.045 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.045 "secure_channel": false, 00:41:14.045 "listen_address": { 00:41:14.045 "trtype": "tcp", 00:41:14.045 "traddr": "127.0.0.1", 00:41:14.045 "trsvcid": "4420" 00:41:14.045 }, 00:41:14.045 "method": "nvmf_subsystem_add_listener", 00:41:14.045 "req_id": 1 00:41:14.045 } 00:41:14.045 Got JSON-RPC error response 00:41:14.045 response: 00:41:14.045 { 00:41:14.045 "code": -32602, 00:41:14.045 "message": "Invalid parameters" 00:41:14.045 } 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:14.045 13:25:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=1258985 00:41:14.045 13:25:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1258985 /var/tmp/bperf.sock 00:41:14.045 13:25:16 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:14.045 13:25:16 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1258985 ']' 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:14.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:14.045 13:25:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:14.045 [2024-11-29 13:25:16.690002] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 00:41:14.045 [2024-11-29 13:25:16.690068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258985 ] 00:41:14.306 [2024-11-29 13:25:16.780604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.306 [2024-11-29 13:25:16.818731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.879 13:25:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:14.879 13:25:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:14.879 13:25:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:14.879 13:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:15.140 13:25:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BJRSSCKpAf 00:41:15.140 13:25:17 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BJRSSCKpAf 00:41:15.401 13:25:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:15.401 13:25:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:15.401 13:25:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Zts8o4V0SZ == \/\t\m\p\/\t\m\p\.\Z\t\s\8\o\4\V\0\S\Z ]] 00:41:15.401 13:25:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:15.401 13:25:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:15.401 13:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.662 13:25:18 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BJRSSCKpAf == \/\t\m\p\/\t\m\p\.\B\J\R\S\S\C\K\p\A\f ]] 00:41:15.662 13:25:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:15.662 13:25:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:15.662 13:25:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:15.662 13:25:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.662 13:25:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:15.662 13:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:41:15.924 13:25:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:15.924 13:25:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:15.924 13:25:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:15.924 13:25:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:15.924 13:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:16.186 [2024-11-29 13:25:18.747452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:16.186 nvme0n1 00:41:16.186 13:25:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:41:16.186 13:25:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:16.186 13:25:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:16.186 13:25:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:16.186 13:25:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:16.186 13:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:41:16.448 13:25:19 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:41:16.448 13:25:19 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:41:16.448 13:25:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:16.448 13:25:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:16.448 13:25:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:16.448 13:25:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:16.448 13:25:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.710 13:25:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:41:16.710 13:25:19 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:16.710 Running I/O for 1 seconds... 00:41:17.653 18214.00 IOPS, 71.15 MiB/s 00:41:17.653 Latency(us) 00:41:17.653 [2024-11-29T12:25:20.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.653 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:17.653 nvme0n1 : 1.00 18273.82 71.38 0.00 0.00 6992.15 2252.80 17039.36 00:41:17.653 [2024-11-29T12:25:20.333Z] =================================================================================================================== 00:41:17.653 [2024-11-29T12:25:20.333Z] Total : 18273.82 71.38 0.00 0.00 6992.15 2252.80 17039.36 00:41:17.653 { 00:41:17.653 "results": [ 00:41:17.653 { 00:41:17.653 "job": "nvme0n1", 00:41:17.653 "core_mask": "0x2", 00:41:17.653 "workload": "randrw", 00:41:17.653 "percentage": 50, 00:41:17.653 "status": "finished", 00:41:17.653 "queue_depth": 128, 00:41:17.653 "io_size": 4096, 00:41:17.653 "runtime": 1.003786, 00:41:17.653 "iops": 18273.815335141157, 00:41:17.653 "mibps": 71.38209115289514, 
00:41:17.653 "io_failed": 0, 00:41:17.653 "io_timeout": 0, 00:41:17.653 "avg_latency_us": 6992.154251758164, 00:41:17.653 "min_latency_us": 2252.8, 00:41:17.653 "max_latency_us": 17039.36 00:41:17.653 } 00:41:17.653 ], 00:41:17.653 "core_count": 1 00:41:17.653 } 00:41:17.915 13:25:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:17.915 13:25:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:17.915 13:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:18.177 13:25:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:18.177 13:25:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:41:18.177 13:25:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:18.177 13:25:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:18.177 13:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:18.177 13:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:18.177 13:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.438 13:25:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:41:18.438 13:25:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:18.438 13:25:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:18.438 13:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:18.438 [2024-11-29 13:25:21.050807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:18.438 [2024-11-29 13:25:21.051636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36c50 (107): Transport endpoint is not connected 00:41:18.438 [2024-11-29 13:25:21.052632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36c50 (9): Bad file descriptor 00:41:18.438 [2024-11-29 13:25:21.053634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:18.438 [2024-11-29 13:25:21.053641] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:18.438 [2024-11-29 13:25:21.053646] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:18.438 [2024-11-29 13:25:21.053653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:41:18.438 request: 00:41:18.438 { 00:41:18.438 "name": "nvme0", 00:41:18.438 "trtype": "tcp", 00:41:18.438 "traddr": "127.0.0.1", 00:41:18.438 "adrfam": "ipv4", 00:41:18.438 "trsvcid": "4420", 00:41:18.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:18.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:18.438 "prchk_reftag": false, 00:41:18.438 "prchk_guard": false, 00:41:18.438 "hdgst": false, 00:41:18.438 "ddgst": false, 00:41:18.438 "psk": "key1", 00:41:18.438 "allow_unrecognized_csi": false, 00:41:18.438 "method": "bdev_nvme_attach_controller", 00:41:18.438 "req_id": 1 00:41:18.438 } 00:41:18.438 Got JSON-RPC error response 00:41:18.438 response: 00:41:18.438 { 00:41:18.438 "code": -5, 00:41:18.438 "message": "Input/output error" 00:41:18.438 } 00:41:18.438 13:25:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:18.438 13:25:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:18.438 13:25:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:18.438 13:25:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:18.438 13:25:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:41:18.438 13:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:18.438 13:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:18.438 13:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:18.438 13:25:21 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.438 13:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:18.698 13:25:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:18.698 13:25:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:41:18.698 13:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:18.698 13:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:18.698 13:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:18.699 13:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:18.699 13:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:18.959 13:25:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:41:18.959 13:25:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:41:18.959 13:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:18.959 13:25:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:41:18.959 13:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:19.220 13:25:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:41:19.220 13:25:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:41:19.220 13:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:19.481 13:25:21 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:41:19.481 13:25:21 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:21 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:19.481 13:25:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 [2024-11-29 13:25:22.110390] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zts8o4V0SZ': 0100660 00:41:19.481 [2024-11-29 13:25:22.110407] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:19.481 request: 00:41:19.481 { 00:41:19.481 "name": "key0", 00:41:19.481 "path": "/tmp/tmp.Zts8o4V0SZ", 00:41:19.481 "method": "keyring_file_add_key", 00:41:19.481 "req_id": 1 00:41:19.481 } 00:41:19.481 Got JSON-RPC error response 00:41:19.481 response: 00:41:19.481 { 00:41:19.481 "code": -1, 00:41:19.481 "message": "Operation not permitted" 00:41:19.481 } 00:41:19.481 13:25:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:19.481 13:25:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:19.481 13:25:22 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 00:41:19.481 13:25:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:19.481 13:25:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.481 13:25:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zts8o4V0SZ 00:41:19.742 13:25:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Zts8o4V0SZ 00:41:19.742 13:25:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:41:19.742 13:25:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:19.742 13:25:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:19.742 13:25:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:19.742 13:25:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:19.742 13:25:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.003 13:25:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:41:20.003 13:25:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.003 13:25:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.003 13:25:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.003 [2024-11-29 13:25:22.635720] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zts8o4V0SZ': No such file or directory 00:41:20.004 [2024-11-29 13:25:22.635733] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:20.004 [2024-11-29 13:25:22.635747] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:20.004 [2024-11-29 13:25:22.635753] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:41:20.004 [2024-11-29 13:25:22.635763] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:20.004 [2024-11-29 13:25:22.635768] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:20.004 request: 00:41:20.004 { 00:41:20.004 "name": "nvme0", 00:41:20.004 "trtype": "tcp", 00:41:20.004 "traddr": "127.0.0.1", 00:41:20.004 "adrfam": "ipv4", 00:41:20.004 "trsvcid": "4420", 00:41:20.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.004 "prchk_reftag": 
false, 00:41:20.004 "prchk_guard": false, 00:41:20.004 "hdgst": false, 00:41:20.004 "ddgst": false, 00:41:20.004 "psk": "key0", 00:41:20.004 "allow_unrecognized_csi": false, 00:41:20.004 "method": "bdev_nvme_attach_controller", 00:41:20.004 "req_id": 1 00:41:20.004 } 00:41:20.004 Got JSON-RPC error response 00:41:20.004 response: 00:41:20.004 { 00:41:20.004 "code": -19, 00:41:20.004 "message": "No such device" 00:41:20.004 } 00:41:20.004 13:25:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:20.004 13:25:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:20.004 13:25:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:20.004 13:25:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:20.004 13:25:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:41:20.004 13:25:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:20.265 13:25:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dJgqIsUnX5 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:20.265 13:25:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dJgqIsUnX5 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dJgqIsUnX5 00:41:20.265 13:25:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.dJgqIsUnX5 00:41:20.265 13:25:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dJgqIsUnX5 00:41:20.265 13:25:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dJgqIsUnX5 00:41:20.526 13:25:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.526 13:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:20.787 nvme0n1 00:41:20.787 13:25:23 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:41:20.787 13:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:20.787 13:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:20.787 13:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:20.787 13:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:20.787 13:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:41:21.048 13:25:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:41:21.048 13:25:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:41:21.048 13:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:21.048 13:25:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:41:21.048 13:25:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:41:21.048 13:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:21.048 13:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:21.048 13:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.309 13:25:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:41:21.309 13:25:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:41:21.309 13:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:21.309 13:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:21.309 13:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:21.309 13:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:21.309 13:25:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.569 13:25:24 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:41:21.569 13:25:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:21.569 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:21.569 13:25:24 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:41:21.569 13:25:24 keyring_file -- keyring/file.sh@105 -- # jq length 00:41:21.569 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.829 13:25:24 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:41:21.829 13:25:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dJgqIsUnX5 00:41:21.829 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dJgqIsUnX5 00:41:22.106 13:25:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BJRSSCKpAf 00:41:22.106 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BJRSSCKpAf 00:41:22.106 13:25:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:22.106 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:22.468 nvme0n1 00:41:22.468 13:25:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:41:22.468 13:25:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:22.754 13:25:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:41:22.754 "subsystems": [ 00:41:22.754 { 00:41:22.754 "subsystem": "keyring", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": 
"keyring_file_add_key", 00:41:22.755 "params": { 00:41:22.755 "name": "key0", 00:41:22.755 "path": "/tmp/tmp.dJgqIsUnX5" 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "keyring_file_add_key", 00:41:22.755 "params": { 00:41:22.755 "name": "key1", 00:41:22.755 "path": "/tmp/tmp.BJRSSCKpAf" 00:41:22.755 } 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "iobuf", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "iobuf_set_options", 00:41:22.755 "params": { 00:41:22.755 "small_pool_count": 8192, 00:41:22.755 "large_pool_count": 1024, 00:41:22.755 "small_bufsize": 8192, 00:41:22.755 "large_bufsize": 135168, 00:41:22.755 "enable_numa": false 00:41:22.755 } 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "sock", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "sock_set_default_impl", 00:41:22.755 "params": { 00:41:22.755 "impl_name": "posix" 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "sock_impl_set_options", 00:41:22.755 "params": { 00:41:22.755 "impl_name": "ssl", 00:41:22.755 "recv_buf_size": 4096, 00:41:22.755 "send_buf_size": 4096, 00:41:22.755 "enable_recv_pipe": true, 00:41:22.755 "enable_quickack": false, 00:41:22.755 "enable_placement_id": 0, 00:41:22.755 "enable_zerocopy_send_server": true, 00:41:22.755 "enable_zerocopy_send_client": false, 00:41:22.755 "zerocopy_threshold": 0, 00:41:22.755 "tls_version": 0, 00:41:22.755 "enable_ktls": false 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "sock_impl_set_options", 00:41:22.755 "params": { 00:41:22.755 "impl_name": "posix", 00:41:22.755 "recv_buf_size": 2097152, 00:41:22.755 "send_buf_size": 2097152, 00:41:22.755 "enable_recv_pipe": true, 00:41:22.755 "enable_quickack": false, 00:41:22.755 "enable_placement_id": 0, 00:41:22.755 "enable_zerocopy_send_server": true, 00:41:22.755 "enable_zerocopy_send_client": false, 00:41:22.755 
"zerocopy_threshold": 0, 00:41:22.755 "tls_version": 0, 00:41:22.755 "enable_ktls": false 00:41:22.755 } 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "vmd", 00:41:22.755 "config": [] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "accel", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "accel_set_options", 00:41:22.755 "params": { 00:41:22.755 "small_cache_size": 128, 00:41:22.755 "large_cache_size": 16, 00:41:22.755 "task_count": 2048, 00:41:22.755 "sequence_count": 2048, 00:41:22.755 "buf_count": 2048 00:41:22.755 } 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "bdev", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "bdev_set_options", 00:41:22.755 "params": { 00:41:22.755 "bdev_io_pool_size": 65535, 00:41:22.755 "bdev_io_cache_size": 256, 00:41:22.755 "bdev_auto_examine": true, 00:41:22.755 "iobuf_small_cache_size": 128, 00:41:22.755 "iobuf_large_cache_size": 16 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_raid_set_options", 00:41:22.755 "params": { 00:41:22.755 "process_window_size_kb": 1024, 00:41:22.755 "process_max_bandwidth_mb_sec": 0 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_iscsi_set_options", 00:41:22.755 "params": { 00:41:22.755 "timeout_sec": 30 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_nvme_set_options", 00:41:22.755 "params": { 00:41:22.755 "action_on_timeout": "none", 00:41:22.755 "timeout_us": 0, 00:41:22.755 "timeout_admin_us": 0, 00:41:22.755 "keep_alive_timeout_ms": 10000, 00:41:22.755 "arbitration_burst": 0, 00:41:22.755 "low_priority_weight": 0, 00:41:22.755 "medium_priority_weight": 0, 00:41:22.755 "high_priority_weight": 0, 00:41:22.755 "nvme_adminq_poll_period_us": 10000, 00:41:22.755 "nvme_ioq_poll_period_us": 0, 00:41:22.755 "io_queue_requests": 512, 00:41:22.755 "delay_cmd_submit": true, 00:41:22.755 
"transport_retry_count": 4, 00:41:22.755 "bdev_retry_count": 3, 00:41:22.755 "transport_ack_timeout": 0, 00:41:22.755 "ctrlr_loss_timeout_sec": 0, 00:41:22.755 "reconnect_delay_sec": 0, 00:41:22.755 "fast_io_fail_timeout_sec": 0, 00:41:22.755 "disable_auto_failback": false, 00:41:22.755 "generate_uuids": false, 00:41:22.755 "transport_tos": 0, 00:41:22.755 "nvme_error_stat": false, 00:41:22.755 "rdma_srq_size": 0, 00:41:22.755 "io_path_stat": false, 00:41:22.755 "allow_accel_sequence": false, 00:41:22.755 "rdma_max_cq_size": 0, 00:41:22.755 "rdma_cm_event_timeout_ms": 0, 00:41:22.755 "dhchap_digests": [ 00:41:22.755 "sha256", 00:41:22.755 "sha384", 00:41:22.755 "sha512" 00:41:22.755 ], 00:41:22.755 "dhchap_dhgroups": [ 00:41:22.755 "null", 00:41:22.755 "ffdhe2048", 00:41:22.755 "ffdhe3072", 00:41:22.755 "ffdhe4096", 00:41:22.755 "ffdhe6144", 00:41:22.755 "ffdhe8192" 00:41:22.755 ] 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_nvme_attach_controller", 00:41:22.755 "params": { 00:41:22.755 "name": "nvme0", 00:41:22.755 "trtype": "TCP", 00:41:22.755 "adrfam": "IPv4", 00:41:22.755 "traddr": "127.0.0.1", 00:41:22.755 "trsvcid": "4420", 00:41:22.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:22.755 "prchk_reftag": false, 00:41:22.755 "prchk_guard": false, 00:41:22.755 "ctrlr_loss_timeout_sec": 0, 00:41:22.755 "reconnect_delay_sec": 0, 00:41:22.755 "fast_io_fail_timeout_sec": 0, 00:41:22.755 "psk": "key0", 00:41:22.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:22.755 "hdgst": false, 00:41:22.755 "ddgst": false, 00:41:22.755 "multipath": "multipath" 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_nvme_set_hotplug", 00:41:22.755 "params": { 00:41:22.755 "period_us": 100000, 00:41:22.755 "enable": false 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "bdev_wait_for_examine" 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "nbd", 00:41:22.755 "config": [] 
00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }' 00:41:22.755 13:25:25 keyring_file -- keyring/file.sh@115 -- # killprocess 1258985 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1258985 ']' 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1258985 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258985 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258985' 00:41:22.755 killing process with pid 1258985 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@973 -- # kill 1258985 00:41:22.755 Received shutdown signal, test time was about 1.000000 seconds 00:41:22.755 00:41:22.755 Latency(us) 00:41:22.755 [2024-11-29T12:25:25.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.755 [2024-11-29T12:25:25.435Z] =================================================================================================================== 00:41:22.755 [2024-11-29T12:25:25.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@978 -- # wait 1258985 00:41:22.755 13:25:25 keyring_file -- keyring/file.sh@118 -- # bperfpid=1260796 00:41:22.755 13:25:25 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1260796 /var/tmp/bperf.sock 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1260796 ']' 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:22.755 13:25:25 keyring_file 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:41:22.755 13:25:25 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:22.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:22.755 13:25:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:22.755 13:25:25 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:41:22.755 "subsystems": [ 00:41:22.755 { 00:41:22.755 "subsystem": "keyring", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "keyring_file_add_key", 00:41:22.755 "params": { 00:41:22.755 "name": "key0", 00:41:22.755 "path": "/tmp/tmp.dJgqIsUnX5" 00:41:22.755 } 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "method": "keyring_file_add_key", 00:41:22.755 "params": { 00:41:22.755 "name": "key1", 00:41:22.755 "path": "/tmp/tmp.BJRSSCKpAf" 00:41:22.755 } 00:41:22.755 } 00:41:22.755 ] 00:41:22.755 }, 00:41:22.755 { 00:41:22.755 "subsystem": "iobuf", 00:41:22.755 "config": [ 00:41:22.755 { 00:41:22.755 "method": "iobuf_set_options", 00:41:22.755 "params": { 00:41:22.755 "small_pool_count": 8192, 00:41:22.755 "large_pool_count": 1024, 00:41:22.755 "small_bufsize": 8192, 00:41:22.756 "large_bufsize": 135168, 00:41:22.756 "enable_numa": false 00:41:22.756 } 00:41:22.756 } 00:41:22.756 ] 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "subsystem": "sock", 00:41:22.756 "config": [ 00:41:22.756 { 00:41:22.756 "method": "sock_set_default_impl", 00:41:22.756 "params": { 00:41:22.756 "impl_name": "posix" 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "sock_impl_set_options", 00:41:22.756 "params": { 00:41:22.756 "impl_name": "ssl", 00:41:22.756 
"recv_buf_size": 4096, 00:41:22.756 "send_buf_size": 4096, 00:41:22.756 "enable_recv_pipe": true, 00:41:22.756 "enable_quickack": false, 00:41:22.756 "enable_placement_id": 0, 00:41:22.756 "enable_zerocopy_send_server": true, 00:41:22.756 "enable_zerocopy_send_client": false, 00:41:22.756 "zerocopy_threshold": 0, 00:41:22.756 "tls_version": 0, 00:41:22.756 "enable_ktls": false 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "sock_impl_set_options", 00:41:22.756 "params": { 00:41:22.756 "impl_name": "posix", 00:41:22.756 "recv_buf_size": 2097152, 00:41:22.756 "send_buf_size": 2097152, 00:41:22.756 "enable_recv_pipe": true, 00:41:22.756 "enable_quickack": false, 00:41:22.756 "enable_placement_id": 0, 00:41:22.756 "enable_zerocopy_send_server": true, 00:41:22.756 "enable_zerocopy_send_client": false, 00:41:22.756 "zerocopy_threshold": 0, 00:41:22.756 "tls_version": 0, 00:41:22.756 "enable_ktls": false 00:41:22.756 } 00:41:22.756 } 00:41:22.756 ] 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "subsystem": "vmd", 00:41:22.756 "config": [] 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "subsystem": "accel", 00:41:22.756 "config": [ 00:41:22.756 { 00:41:22.756 "method": "accel_set_options", 00:41:22.756 "params": { 00:41:22.756 "small_cache_size": 128, 00:41:22.756 "large_cache_size": 16, 00:41:22.756 "task_count": 2048, 00:41:22.756 "sequence_count": 2048, 00:41:22.756 "buf_count": 2048 00:41:22.756 } 00:41:22.756 } 00:41:22.756 ] 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "subsystem": "bdev", 00:41:22.756 "config": [ 00:41:22.756 { 00:41:22.756 "method": "bdev_set_options", 00:41:22.756 "params": { 00:41:22.756 "bdev_io_pool_size": 65535, 00:41:22.756 "bdev_io_cache_size": 256, 00:41:22.756 "bdev_auto_examine": true, 00:41:22.756 "iobuf_small_cache_size": 128, 00:41:22.756 "iobuf_large_cache_size": 16 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_raid_set_options", 00:41:22.756 "params": { 00:41:22.756 
"process_window_size_kb": 1024, 00:41:22.756 "process_max_bandwidth_mb_sec": 0 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_iscsi_set_options", 00:41:22.756 "params": { 00:41:22.756 "timeout_sec": 30 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_nvme_set_options", 00:41:22.756 "params": { 00:41:22.756 "action_on_timeout": "none", 00:41:22.756 "timeout_us": 0, 00:41:22.756 "timeout_admin_us": 0, 00:41:22.756 "keep_alive_timeout_ms": 10000, 00:41:22.756 "arbitration_burst": 0, 00:41:22.756 "low_priority_weight": 0, 00:41:22.756 "medium_priority_weight": 0, 00:41:22.756 "high_priority_weight": 0, 00:41:22.756 "nvme_adminq_poll_period_us": 10000, 00:41:22.756 "nvme_ioq_poll_period_us": 0, 00:41:22.756 "io_queue_requests": 512, 00:41:22.756 "delay_cmd_submit": true, 00:41:22.756 "transport_retry_count": 4, 00:41:22.756 "bdev_retry_count": 3, 00:41:22.756 "transport_ack_timeout": 0, 00:41:22.756 "ctrlr_loss_timeout_sec": 0, 00:41:22.756 "reconnect_delay_sec": 0, 00:41:22.756 "fast_io_fail_timeout_sec": 0, 00:41:22.756 "disable_auto_failback": false, 00:41:22.756 "generate_uuids": false, 00:41:22.756 "transport_tos": 0, 00:41:22.756 "nvme_error_stat": false, 00:41:22.756 "rdma_srq_size": 0, 00:41:22.756 13:25:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:22.756 "io_path_stat": false, 00:41:22.756 "allow_accel_sequence": false, 00:41:22.756 "rdma_max_cq_size": 0, 00:41:22.756 "rdma_cm_event_timeout_ms": 0, 00:41:22.756 "dhchap_digests": [ 00:41:22.756 "sha256", 00:41:22.756 "sha384", 00:41:22.756 "sha512" 00:41:22.756 ], 00:41:22.756 "dhchap_dhgroups": [ 00:41:22.756 "null", 00:41:22.756 "ffdhe2048", 00:41:22.756 "ffdhe3072", 00:41:22.756 "ffdhe4096", 00:41:22.756 "ffdhe6144", 00:41:22.756 "ffdhe8192" 00:41:22.756 ] 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_nvme_attach_controller", 00:41:22.756 "params": { 00:41:22.756 "name": "nvme0", 00:41:22.756 "trtype": 
"TCP", 00:41:22.756 "adrfam": "IPv4", 00:41:22.756 "traddr": "127.0.0.1", 00:41:22.756 "trsvcid": "4420", 00:41:22.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:22.756 "prchk_reftag": false, 00:41:22.756 "prchk_guard": false, 00:41:22.756 "ctrlr_loss_timeout_sec": 0, 00:41:22.756 "reconnect_delay_sec": 0, 00:41:22.756 "fast_io_fail_timeout_sec": 0, 00:41:22.756 "psk": "key0", 00:41:22.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:22.756 "hdgst": false, 00:41:22.756 "ddgst": false, 00:41:22.756 "multipath": "multipath" 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_nvme_set_hotplug", 00:41:22.756 "params": { 00:41:22.756 "period_us": 100000, 00:41:22.756 "enable": false 00:41:22.756 } 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "method": "bdev_wait_for_examine" 00:41:22.756 } 00:41:22.756 ] 00:41:22.756 }, 00:41:22.756 { 00:41:22.756 "subsystem": "nbd", 00:41:22.756 "config": [] 00:41:22.756 } 00:41:22.756 ] 00:41:22.756 }' 00:41:22.756 [2024-11-29 13:25:25.410365] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:41:22.756 [2024-11-29 13:25:25.410422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260796 ] 00:41:23.017 [2024-11-29 13:25:25.491782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.017 [2024-11-29 13:25:25.520792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.017 [2024-11-29 13:25:25.664649] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:23.587 13:25:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:23.587 13:25:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:23.587 13:25:26 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:41:23.587 13:25:26 keyring_file -- keyring/file.sh@121 -- # jq length 00:41:23.587 13:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:23.846 13:25:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:23.847 13:25:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:41:23.847 13:25:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:23.847 13:25:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:23.847 13:25:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:23.847 13:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:23.847 13:25:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:24.106 13:25:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:41:24.106 13:25:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:41:24.106 13:25:26 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:24.106 13:25:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:24.106 13:25:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:24.106 13:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.106 13:25:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:24.106 13:25:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:41:24.106 13:25:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:41:24.106 13:25:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:24.106 13:25:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:41:24.367 13:25:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:41:24.367 13:25:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:24.367 13:25:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dJgqIsUnX5 /tmp/tmp.BJRSSCKpAf 00:41:24.367 13:25:26 keyring_file -- keyring/file.sh@20 -- # killprocess 1260796 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1260796 ']' 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1260796 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1260796 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1260796' 00:41:24.367 killing process with pid 1260796 00:41:24.367 13:25:26 keyring_file -- common/autotest_common.sh@973 -- # kill 1260796 00:41:24.367 Received shutdown signal, test time was about 1.000000 seconds 00:41:24.367 00:41:24.368 Latency(us) 00:41:24.368 [2024-11-29T12:25:27.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:24.368 [2024-11-29T12:25:27.048Z] =================================================================================================================== 00:41:24.368 [2024-11-29T12:25:27.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:24.368 13:25:26 keyring_file -- common/autotest_common.sh@978 -- # wait 1260796 00:41:24.628 13:25:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1258956 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1258956 ']' 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1258956 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258956 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258956' 00:41:24.628 killing process with pid 1258956 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@973 -- # kill 1258956 00:41:24.628 13:25:27 keyring_file -- common/autotest_common.sh@978 -- # wait 1258956 00:41:24.888 00:41:24.888 real 0m12.003s 00:41:24.888 user 0m28.997s 00:41:24.888 sys 0m2.699s 00:41:24.888 13:25:27 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:41:24.888 13:25:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:24.888 ************************************ 00:41:24.888 END TEST keyring_file 00:41:24.888 ************************************ 00:41:24.888 13:25:27 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:41:24.888 13:25:27 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:24.888 13:25:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:24.888 13:25:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.888 13:25:27 -- common/autotest_common.sh@10 -- # set +x 00:41:24.888 ************************************ 00:41:24.888 START TEST keyring_linux 00:41:24.888 ************************************ 00:41:24.888 13:25:27 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:24.888 Joined session keyring: 586356114 00:41:24.888 * Looking for test storage... 
00:41:24.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:24.888 13:25:27 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:24.888 13:25:27 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:41:24.888 13:25:27 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:25.150 13:25:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:25.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.150 --rc genhtml_branch_coverage=1 00:41:25.150 --rc genhtml_function_coverage=1 00:41:25.150 --rc genhtml_legend=1 00:41:25.150 --rc geninfo_all_blocks=1 00:41:25.150 --rc geninfo_unexecuted_blocks=1 00:41:25.150 00:41:25.150 ' 00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:25.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.150 --rc genhtml_branch_coverage=1 00:41:25.150 --rc genhtml_function_coverage=1 00:41:25.150 --rc genhtml_legend=1 00:41:25.150 --rc geninfo_all_blocks=1 00:41:25.150 --rc geninfo_unexecuted_blocks=1 00:41:25.150 00:41:25.150 ' 
00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:25.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.150 --rc genhtml_branch_coverage=1 00:41:25.150 --rc genhtml_function_coverage=1 00:41:25.150 --rc genhtml_legend=1 00:41:25.150 --rc geninfo_all_blocks=1 00:41:25.150 --rc geninfo_unexecuted_blocks=1 00:41:25.150 00:41:25.150 ' 00:41:25.150 13:25:27 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:25.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.150 --rc genhtml_branch_coverage=1 00:41:25.150 --rc genhtml_function_coverage=1 00:41:25.150 --rc genhtml_legend=1 00:41:25.150 --rc geninfo_all_blocks=1 00:41:25.150 --rc geninfo_unexecuted_blocks=1 00:41:25.150 00:41:25.150 ' 00:41:25.150 13:25:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:25.151 13:25:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:41:25.151 13:25:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:25.151 13:25:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:25.151 13:25:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:25.151 13:25:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.151 13:25:27 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.151 13:25:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.151 13:25:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:25.151 13:25:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:41:25.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:25.151 /tmp/:spdk-test:key0 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:25.151 13:25:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:25.151 13:25:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:25.151 /tmp/:spdk-test:key1 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1261257 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1261257 00:41:25.151 13:25:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1261257 ']' 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:25.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:25.151 13:25:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:25.151 [2024-11-29 13:25:27.810493] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:41:25.151 [2024-11-29 13:25:27.810567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261257 ] 00:41:25.412 [2024-11-29 13:25:27.897761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.412 [2024-11-29 13:25:27.939315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.982 13:25:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:25.982 13:25:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:25.982 13:25:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:25.982 13:25:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.982 13:25:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:25.982 [2024-11-29 13:25:28.604336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:25.982 null0 00:41:25.982 [2024-11-29 13:25:28.636392] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:25.982 [2024-11-29 13:25:28.636701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:25.982 13:25:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.982 13:25:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:26.242 829928926 00:41:26.242 13:25:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:26.242 64331603 00:41:26.242 13:25:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1261573 00:41:26.242 13:25:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1261573 /var/tmp/bperf.sock 00:41:26.242 13:25:28 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1261573 ']' 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:26.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.242 13:25:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:26.242 [2024-11-29 13:25:28.721787] Starting SPDK v25.01-pre git sha1 da516d862 / DPDK 24.03.0 initialization... 
00:41:26.243 [2024-11-29 13:25:28.721837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261573 ] 00:41:26.243 [2024-11-29 13:25:28.805059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.243 [2024-11-29 13:25:28.834578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.184 13:25:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.184 13:25:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:27.184 13:25:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:27.184 13:25:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:27.184 13:25:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:27.184 13:25:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:27.445 13:25:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:27.445 13:25:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:27.445 [2024-11-29 13:25:30.063867] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:27.706 nvme0n1 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:27.706 13:25:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:27.706 13:25:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:27.706 13:25:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:27.706 13:25:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:27.706 13:25:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@25 -- # sn=829928926 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 829928926 == \8\2\9\9\2\8\9\2\6 ]] 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 829928926 00:41:27.966 13:25:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:27.966 13:25:30 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:27.966 Running I/O for 1 seconds... 00:41:29.351 24220.00 IOPS, 94.61 MiB/s 00:41:29.351 Latency(us) 00:41:29.351 [2024-11-29T12:25:32.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.351 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:29.351 nvme0n1 : 1.01 24220.19 94.61 0.00 0.00 5269.30 1788.59 6471.68 00:41:29.351 [2024-11-29T12:25:32.031Z] =================================================================================================================== 00:41:29.351 [2024-11-29T12:25:32.031Z] Total : 24220.19 94.61 0.00 0.00 5269.30 1788.59 6471.68 00:41:29.351 { 00:41:29.351 "results": [ 00:41:29.351 { 00:41:29.351 "job": "nvme0n1", 00:41:29.351 "core_mask": "0x2", 00:41:29.351 "workload": "randread", 00:41:29.351 "status": "finished", 00:41:29.351 "queue_depth": 128, 00:41:29.351 "io_size": 4096, 00:41:29.351 "runtime": 1.005277, 00:41:29.351 "iops": 24220.19005706885, 00:41:29.351 "mibps": 94.61011741042519, 00:41:29.351 "io_failed": 0, 00:41:29.351 "io_timeout": 0, 00:41:29.351 "avg_latency_us": 5269.298426154099, 00:41:29.351 "min_latency_us": 1788.5866666666666, 00:41:29.351 "max_latency_us": 6471.68 00:41:29.351 } 00:41:29.351 ], 00:41:29.351 "core_count": 1 00:41:29.351 } 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:29.351 13:25:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:29.351 13:25:31 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:29.351 13:25:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:29.351 13:25:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:29.351 13:25:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:29.351 13:25:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:29.612 [2024-11-29 13:25:32.145924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:29.612 [2024-11-29 13:25:32.146583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e09e0 (107): Transport endpoint is not connected 00:41:29.612 [2024-11-29 13:25:32.147579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e09e0 (9): Bad file descriptor 00:41:29.612 [2024-11-29 13:25:32.148581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:29.612 [2024-11-29 13:25:32.148588] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:29.612 [2024-11-29 13:25:32.148594] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:29.612 [2024-11-29 13:25:32.148601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:41:29.612 request: 00:41:29.612 { 00:41:29.612 "name": "nvme0", 00:41:29.612 "trtype": "tcp", 00:41:29.612 "traddr": "127.0.0.1", 00:41:29.612 "adrfam": "ipv4", 00:41:29.612 "trsvcid": "4420", 00:41:29.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.612 "prchk_reftag": false, 00:41:29.612 "prchk_guard": false, 00:41:29.612 "hdgst": false, 00:41:29.612 "ddgst": false, 00:41:29.612 "psk": ":spdk-test:key1", 00:41:29.612 "allow_unrecognized_csi": false, 00:41:29.612 "method": "bdev_nvme_attach_controller", 00:41:29.612 "req_id": 1 00:41:29.612 } 00:41:29.612 Got JSON-RPC error response 00:41:29.612 response: 00:41:29.612 { 00:41:29.612 "code": -5, 00:41:29.612 "message": "Input/output error" 00:41:29.612 } 00:41:29.612 13:25:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:41:29.612 13:25:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:29.612 13:25:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:29.612 13:25:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@33 -- # sn=829928926 00:41:29.612 13:25:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 829928926 00:41:29.613 1 links removed 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:29.613 
13:25:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@33 -- # sn=64331603 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 64331603 00:41:29.613 1 links removed 00:41:29.613 13:25:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1261573 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1261573 ']' 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1261573 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1261573 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1261573' 00:41:29.613 killing process with pid 1261573 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 1261573 00:41:29.613 Received shutdown signal, test time was about 1.000000 seconds 00:41:29.613 00:41:29.613 Latency(us) 00:41:29.613 [2024-11-29T12:25:32.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.613 [2024-11-29T12:25:32.293Z] =================================================================================================================== 00:41:29.613 [2024-11-29T12:25:32.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:29.613 13:25:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 1261573 
00:41:29.873 13:25:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1261257 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1261257 ']' 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1261257 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1261257 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1261257' 00:41:29.873 killing process with pid 1261257 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 1261257 00:41:29.873 13:25:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 1261257 00:41:30.135 00:41:30.135 real 0m5.189s 00:41:30.135 user 0m9.599s 00:41:30.135 sys 0m1.457s 00:41:30.135 13:25:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:30.135 13:25:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:30.135 ************************************ 00:41:30.135 END TEST keyring_linux 00:41:30.135 ************************************ 00:41:30.135 13:25:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:41:30.135 13:25:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:41:30.135 13:25:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:41:30.135 13:25:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:41:30.135 13:25:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:41:30.135 13:25:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:41:30.135 13:25:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:41:30.135 13:25:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:30.135 13:25:32 -- common/autotest_common.sh@10 -- # set +x 00:41:30.135 13:25:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:41:30.135 13:25:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:41:30.135 13:25:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:41:30.135 13:25:32 -- common/autotest_common.sh@10 -- # set +x 00:41:38.275 INFO: APP EXITING 00:41:38.275 INFO: killing all VMs 00:41:38.275 INFO: killing vhost app 00:41:38.275 WARN: no vhost pid file found 00:41:38.275 INFO: EXIT DONE 00:41:41.578 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:41.578 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.7 (8086 0b00): 
Already using the ioatdma driver 00:41:41.578 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:41.578 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:45.783 Cleaning 00:41:45.783 Removing: /var/run/dpdk/spdk0/config 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:45.783 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:45.783 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:45.783 Removing: /var/run/dpdk/spdk1/config 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:45.783 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:45.783 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:45.783 Removing: /var/run/dpdk/spdk2/config 00:41:45.783 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:45.783 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:45.783 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:45.783 Removing: /var/run/dpdk/spdk3/config 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:45.783 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:45.783 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:45.783 Removing: /var/run/dpdk/spdk4/config 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:45.783 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:41:45.783 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:45.783 Removing: /dev/shm/bdev_svc_trace.1 00:41:45.783 Removing: /dev/shm/nvmf_trace.0 00:41:45.783 Removing: /dev/shm/spdk_tgt_trace.pid682598 00:41:45.783 Removing: /var/run/dpdk/spdk0 00:41:45.783 Removing: /var/run/dpdk/spdk1 00:41:45.783 Removing: /var/run/dpdk/spdk2 00:41:45.783 Removing: /var/run/dpdk/spdk3 00:41:45.783 Removing: /var/run/dpdk/spdk4 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1000738 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1005955 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1011000 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1020101 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1020139 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1025448 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1025686 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1025845 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1026487 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1026492 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1032032 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1032734 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1038681 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1041798 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1048507 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1055135 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1065382 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1073955 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1073957 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1097615 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1098399 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1099090 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1099772 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1100831 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1101519 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1102217 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1102997 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1108254 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1108597 00:41:45.783 Removing: 
/var/run/dpdk/spdk_pid1115653 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1116037 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1122497 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1127531 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1139772 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1140441 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1145491 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1145845 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1150886 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1157791 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1160712 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1172970 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1183591 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1185572 00:41:45.783 Removing: /var/run/dpdk/spdk_pid1186693 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1206761 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1211485 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1214680 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1222428 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1222433 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1228415 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1230828 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1233025 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1234449 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1236737 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1238361 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1248787 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1249457 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1250121 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1253033 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1253483 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1254086 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1258956 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1258985 00:41:45.784 Removing: /var/run/dpdk/spdk_pid1260796 00:41:46.044 Removing: /var/run/dpdk/spdk_pid1261257 00:41:46.044 Removing: /var/run/dpdk/spdk_pid1261573 00:41:46.044 Removing: /var/run/dpdk/spdk_pid681019 
00:41:46.044 Removing: /var/run/dpdk/spdk_pid682598 00:41:46.044 Removing: /var/run/dpdk/spdk_pid683781 00:41:46.044 Removing: /var/run/dpdk/spdk_pid684948 00:41:46.044 Removing: /var/run/dpdk/spdk_pid685288 00:41:46.044 Removing: /var/run/dpdk/spdk_pid686359 00:41:46.044 Removing: /var/run/dpdk/spdk_pid686589 00:41:46.044 Removing: /var/run/dpdk/spdk_pid686827 00:41:46.044 Removing: /var/run/dpdk/spdk_pid687965 00:41:46.044 Removing: /var/run/dpdk/spdk_pid688750 00:41:46.044 Removing: /var/run/dpdk/spdk_pid689128 00:41:46.044 Removing: /var/run/dpdk/spdk_pid689464 00:41:46.044 Removing: /var/run/dpdk/spdk_pid689830 00:41:46.044 Removing: /var/run/dpdk/spdk_pid690143 00:41:46.044 Removing: /var/run/dpdk/spdk_pid690405 00:41:46.044 Removing: /var/run/dpdk/spdk_pid690753 00:41:46.044 Removing: /var/run/dpdk/spdk_pid691146 00:41:46.044 Removing: /var/run/dpdk/spdk_pid692213 00:41:46.044 Removing: /var/run/dpdk/spdk_pid695799 00:41:46.044 Removing: /var/run/dpdk/spdk_pid695869 00:41:46.044 Removing: /var/run/dpdk/spdk_pid696208 00:41:46.044 Removing: /var/run/dpdk/spdk_pid696534 00:41:46.044 Removing: /var/run/dpdk/spdk_pid696909 00:41:46.044 Removing: /var/run/dpdk/spdk_pid697157 00:41:46.044 Removing: /var/run/dpdk/spdk_pid697616 00:41:46.044 Removing: /var/run/dpdk/spdk_pid697632 00:41:46.044 Removing: /var/run/dpdk/spdk_pid697995 00:41:46.044 Removing: /var/run/dpdk/spdk_pid698227 00:41:46.044 Removing: /var/run/dpdk/spdk_pid698372 00:41:46.044 Removing: /var/run/dpdk/spdk_pid698704 00:41:46.044 Removing: /var/run/dpdk/spdk_pid699156 00:41:46.044 Removing: /var/run/dpdk/spdk_pid699502 00:41:46.044 Removing: /var/run/dpdk/spdk_pid699797 00:41:46.044 Removing: /var/run/dpdk/spdk_pid704435 00:41:46.044 Removing: /var/run/dpdk/spdk_pid709820 00:41:46.044 Removing: /var/run/dpdk/spdk_pid721844 00:41:46.044 Removing: /var/run/dpdk/spdk_pid722526 00:41:46.044 Removing: /var/run/dpdk/spdk_pid727851 00:41:46.044 Removing: /var/run/dpdk/spdk_pid728283 00:41:46.044 Removing: 
/var/run/dpdk/spdk_pid733882 00:41:46.044 Removing: /var/run/dpdk/spdk_pid741008 00:41:46.044 Removing: /var/run/dpdk/spdk_pid744287 00:41:46.044 Removing: /var/run/dpdk/spdk_pid756908 00:41:46.044 Removing: /var/run/dpdk/spdk_pid767719 00:41:46.044 Removing: /var/run/dpdk/spdk_pid769892 00:41:46.044 Removing: /var/run/dpdk/spdk_pid771048 00:41:46.044 Removing: /var/run/dpdk/spdk_pid792619 00:41:46.044 Removing: /var/run/dpdk/spdk_pid797376 00:41:46.044 Removing: /var/run/dpdk/spdk_pid853879 00:41:46.044 Removing: /var/run/dpdk/spdk_pid860265 00:41:46.044 Removing: /var/run/dpdk/spdk_pid867437 00:41:46.044 Removing: /var/run/dpdk/spdk_pid875338 00:41:46.044 Removing: /var/run/dpdk/spdk_pid875347 00:41:46.044 Removing: /var/run/dpdk/spdk_pid876349 00:41:46.044 Removing: /var/run/dpdk/spdk_pid877353 00:41:46.044 Removing: /var/run/dpdk/spdk_pid878362 00:41:46.305 Removing: /var/run/dpdk/spdk_pid879032 00:41:46.305 Removing: /var/run/dpdk/spdk_pid879037 00:41:46.305 Removing: /var/run/dpdk/spdk_pid879369 00:41:46.305 Removing: /var/run/dpdk/spdk_pid879381 00:41:46.305 Removing: /var/run/dpdk/spdk_pid879437 00:41:46.305 Removing: /var/run/dpdk/spdk_pid880522 00:41:46.305 Removing: /var/run/dpdk/spdk_pid881545 00:41:46.305 Removing: /var/run/dpdk/spdk_pid882631 00:41:46.305 Removing: /var/run/dpdk/spdk_pid883240 00:41:46.305 Removing: /var/run/dpdk/spdk_pid883369 00:41:46.305 Removing: /var/run/dpdk/spdk_pid883609 00:41:46.305 Removing: /var/run/dpdk/spdk_pid885016 00:41:46.305 Removing: /var/run/dpdk/spdk_pid886252 00:41:46.305 Removing: /var/run/dpdk/spdk_pid896811 00:41:46.305 Removing: /var/run/dpdk/spdk_pid931488 00:41:46.305 Removing: /var/run/dpdk/spdk_pid936889 00:41:46.305 Removing: /var/run/dpdk/spdk_pid938889 00:41:46.305 Removing: /var/run/dpdk/spdk_pid941233 00:41:46.305 Removing: /var/run/dpdk/spdk_pid941428 00:41:46.305 Removing: /var/run/dpdk/spdk_pid941618 00:41:46.305 Removing: /var/run/dpdk/spdk_pid941946 00:41:46.305 Removing: 
/var/run/dpdk/spdk_pid942666 00:41:46.305 Removing: /var/run/dpdk/spdk_pid945005 00:41:46.305 Removing: /var/run/dpdk/spdk_pid946115 00:41:46.305 Removing: /var/run/dpdk/spdk_pid946802 00:41:46.305 Removing: /var/run/dpdk/spdk_pid949514 00:41:46.305 Removing: /var/run/dpdk/spdk_pid950218 00:41:46.305 Removing: /var/run/dpdk/spdk_pid950938 00:41:46.305 Removing: /var/run/dpdk/spdk_pid955990 00:41:46.305 Removing: /var/run/dpdk/spdk_pid962687 00:41:46.305 Removing: /var/run/dpdk/spdk_pid962688 00:41:46.305 Removing: /var/run/dpdk/spdk_pid962689 00:41:46.305 Removing: /var/run/dpdk/spdk_pid967385 00:41:46.305 Removing: /var/run/dpdk/spdk_pid977635 00:41:46.305 Removing: /var/run/dpdk/spdk_pid983020 00:41:46.305 Removing: /var/run/dpdk/spdk_pid990250 00:41:46.305 Removing: /var/run/dpdk/spdk_pid991749 00:41:46.305 Removing: /var/run/dpdk/spdk_pid993415 00:41:46.305 Removing: /var/run/dpdk/spdk_pid995113 00:41:46.305 Clean 00:41:46.566 13:25:48 -- common/autotest_common.sh@1453 -- # return 0 00:41:46.566 13:25:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:46.566 13:25:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.566 13:25:48 -- common/autotest_common.sh@10 -- # set +x 00:41:46.566 13:25:49 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:41:46.566 13:25:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:46.566 13:25:49 -- common/autotest_common.sh@10 -- # set +x 00:41:46.566 13:25:49 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:46.566 13:25:49 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:46.566 13:25:49 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:46.566 13:25:49 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:46.566 13:25:49 -- spdk/autotest.sh@398 -- # hostname 00:41:46.566 13:25:49 -- spdk/autotest.sh@398 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:46.827 geninfo: WARNING: invalid characters removed from testname! 00:42:13.400 13:26:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:15.312 13:26:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:16.692 13:26:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:18.603 13:26:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:19.984 13:26:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:21.894 13:26:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:24.437 13:26:26 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:24.437 13:26:26 -- spdk/autorun.sh@1 -- $ timing_finish 00:42:24.437 13:26:26 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:42:24.437 13:26:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:24.437 13:26:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:42:24.437 13:26:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:24.437 + [[ -n 596472 ]] 00:42:24.438 + sudo kill 596472 00:42:24.449 [Pipeline] } 00:42:24.464 [Pipeline] // 
stage 00:42:24.469 [Pipeline] } 00:42:24.482 [Pipeline] // timeout 00:42:24.487 [Pipeline] } 00:42:24.502 [Pipeline] // catchError 00:42:24.507 [Pipeline] } 00:42:24.521 [Pipeline] // wrap 00:42:24.527 [Pipeline] } 00:42:24.539 [Pipeline] // catchError 00:42:24.547 [Pipeline] stage 00:42:24.550 [Pipeline] { (Epilogue) 00:42:24.562 [Pipeline] catchError 00:42:24.564 [Pipeline] { 00:42:24.576 [Pipeline] echo 00:42:24.577 Cleanup processes 00:42:24.583 [Pipeline] sh 00:42:24.872 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:24.872 1274568 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:24.887 [Pipeline] sh 00:42:25.174 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:25.174 ++ grep -v 'sudo pgrep' 00:42:25.174 ++ awk '{print $1}' 00:42:25.174 + sudo kill -9 00:42:25.174 + true 00:42:25.188 [Pipeline] sh 00:42:25.474 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:37.712 [Pipeline] sh 00:42:38.000 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:38.000 Artifacts sizes are good 00:42:38.015 [Pipeline] archiveArtifacts 00:42:38.024 Archiving artifacts 00:42:38.181 [Pipeline] sh 00:42:38.561 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:42:38.577 [Pipeline] cleanWs 00:42:38.588 [WS-CLEANUP] Deleting project workspace... 00:42:38.588 [WS-CLEANUP] Deferred wipeout is used... 00:42:38.595 [WS-CLEANUP] done 00:42:38.597 [Pipeline] } 00:42:38.614 [Pipeline] // catchError 00:42:38.626 [Pipeline] sh 00:42:38.911 + logger -p user.info -t JENKINS-CI 00:42:38.921 [Pipeline] } 00:42:38.935 [Pipeline] // stage 00:42:38.940 [Pipeline] } 00:42:38.955 [Pipeline] // node 00:42:38.960 [Pipeline] End of Pipeline 00:42:38.990 Finished: SUCCESS